What strategies can organizations use to address the added compliance burden that comes with AI-enabled tools? How do you keep up with changes to both the vendor’s Terms of Service as well as various data privacy laws?
We addressed this by working closely with our data analytics team to develop templates for the business stakeholders who own the data. These templates include metadata tags that specify, for example, whether data may be used internally, or that flag a dataset as “no AI.” This approach draws on GDPR requirements and also considers the need to protect intellectual property. Rather than trying to keep up with every AI tool accessing our data pools, we keep all data siloed within our Google environment and control access through that additional metadata.
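The tag-based access control described above could be sketched roughly as follows. This is only an illustration of the idea, assuming a simple dataset catalog; the tag names, dataset records, and function are hypothetical, not taken from the poster's actual system.

```python
# Hypothetical sketch: enforcing a "no AI" metadata tag before a dataset
# is exposed to any AI-enabled tool. Tag names and records are invented
# for illustration.

DATASETS = [
    {"name": "customer_orders", "tags": ["internal_only"]},
    {"name": "hr_records", "tags": ["internal_only", "no_ai"]},
    {"name": "product_catalog", "tags": []},
]

def ai_eligible(dataset):
    """Return True only if the dataset carries no tag forbidding AI use."""
    return "no_ai" not in dataset["tags"]

# Only datasets without the "no_ai" tag pass the gate.
approved = [d["name"] for d in DATASETS if ai_eligible(d)]
print(approved)  # ['customer_orders', 'product_catalog']
```

The point of the design is that the owner of the data sets the tag once, and every downstream tool checks the same gate, rather than each tool being assessed individually.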
My approach is to make data the central focus. Once you know what data you have, you can drill down into the relevant jurisdictions and identify any overarching regulations, such as GDPR, which follows the person rather than just the database. Ideally, find out what the business intends to do before anything is rolled out. If you can gather this information in advance, you can spot potential legal issues and involve legal teams early, so that legal and compliance can work on the different aspects in parallel rather than blocking a tool after rollout because of jurisdictional restrictions. Getting legal and compliance involved early, and requiring the business to specify exactly what data it wants to use, streamlines the compliance process.
This is a significant challenge for us as a global organization, since we must comply with a wide range of regulations, including Canadian regulations, GDPR from Europe, the UK’s distinct regulations, as well as Australian, Singaporean, and various US regulations. One particular challenge is dealing with vendors who use click-through agreements, which can change after a point-in-time assessment. Sometimes, vendors will enable AI features without notifying us, and we only discover these changes after the fact, which forces us to chase due diligence and evaluate the impact retroactively. It often feels like we are chasing a moving train, and managing this compliance landscape is an ongoing challenge.
As part of our TPRM process, we added a series of questions specifically focused on AI. We approach vendors differently based on their responses, with particular attention to issues like the use of public LLMs or training on customer data. While this may not directly address every compliance issue, it is one way we have adapted to the increased burden.
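The idea of routing vendors differently based on their AI questionnaire answers could be sketched like this. The question keys, review tracks, and routing logic here are invented for illustration; an actual TPRM program would have many more questions and tiers.

```python
# Hypothetical sketch: routing a vendor review based on AI-focused
# TPRM questionnaire answers. Keys and track names are illustrative only.

def review_track(answers):
    """Map questionnaire answers to a review track, worst case first."""
    if answers.get("trains_on_customer_data"):
        return "enhanced_review"          # training on our data: deepest scrutiny
    if answers.get("uses_public_llm"):
        return "legal_and_privacy_review" # public LLM use: legal/privacy sign-off
    return "standard_review"              # no AI red flags raised

vendor = {"uses_public_llm": True, "trains_on_customer_data": False}
print(review_track(vendor))  # legal_and_privacy_review
```

Encoding the routing rules explicitly also gives you an audit trail of why a vendor landed in a given track.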
We have also explored using AI to help manage the compliance workload itself, such as updating policies, reviewing documentation, and assisting with drafting. A human always reviews and finalizes these materials, but AI helps us absorb the added burden.
We use a twofold approach. From a governance perspective, a committee reviews AI-enabled tools at a high level, and we maintain close collaboration with compliance, legal, and other relevant teams. I agree with considering each country’s specific regulatory environment: some regulations, like GDPR, focus on personal data, while others are more concerned with data sets. We have specialized teams for different regulations, which helps us understand the regulatory posture in each country when deploying or using AI tools. Ultimately, collaboration between departments and the right internal expertise are key to protecting the company’s interests.