What are the basic controls, especially technical ones, that are considered most important for mitigating GenAI risks in the enterprise?

337 views · 4 Comments
Group Director of Information Security in Banking, a year ago

Basic controls, especially technical ones, change from one use case to another, so you need to concentrate on your use cases. For example:

1. If you are using the most commonly deployed pattern, Retrieval-Augmented Generation (RAG) with Azure AI Search, the following is a good source of basic controls to build in:
https://learn.microsoft.com/en-us/azure/search/search-security-overview

2. If you have subscribed to Copilot for Microsoft 365, take a look at 'How does Microsoft Copilot for Microsoft 365 protect organizational data?' It gives guidance on some controls to consider.
https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-privacy

3. The controls market is split among products that claim to mitigate GenAI risks in the areas of Content Anomaly Detection, Privacy & Data Protection, and AI Application Security. Calypso AI, Lasso Security, and Robust Intelligence products come close to covering two of the three areas. As it's a highly evolving market, keep evaluating.
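As a concrete illustration of the Privacy & Data Protection area mentioned above, here is a minimal sketch of a gateway that redacts obviously sensitive tokens from a prompt before it is sent to an LLM. This is not any vendor's product; the patterns and function names are purely illustrative:

```python
import re

# Illustrative patterns only -- a real privacy/DLP control uses far richer
# detection (classifiers, fingerprints, context) than these simple regexes.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders; return redacted text and hit labels."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, hits

redacted, findings = redact_prompt("Email jane@example.com, SSN 123-45-6789")
# findings now lists which categories were detected in the prompt
```

A production control would sit in a proxy or SDK wrapper so every outbound LLM call passes through it, and would log findings for audit rather than silently dropping them.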

AVP of Information Security, a year ago

Community members, this is a great topic, and I think it depends on the industry each of us is in. For healthcare, cardholder, or financial services data I would recommend the following controls:
1. An AI policy or security stance outlining the rules of engagement
2. DLP scanning of any repositories to identify whether classified data exists within the data sources that may be leveraged by an LLM
3. A CASB endpoint configuration that blocks uploads of classified data to sites categorized as AI in your Secure Web Gateway
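The DLP scan in step 2 could be prototyped as a simple repository walker. This is a sketch only; enterprise DLP tools use classification labels and document fingerprinting, and the patterns here (including the record-number format) are hypothetical:

```python
import re
from pathlib import Path

# Illustrative "classified data" patterns; real DLP uses richer detection.
PATTERNS = {
    "PAN": re.compile(r"\b\d{13,16}\b"),           # possible card number
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN format
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b"), # hypothetical record number
}

def scan_repository(root: str) -> dict[str, list[str]]:
    """Return {file path: [pattern labels]} for files containing flagged data."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; a real scanner would log this
        labels = [name for name, pat in PATTERNS.items() if pat.search(text)]
        if labels:
            findings[str(path)] = labels
    return findings
```

Running such a scan before connecting a repository to a RAG pipeline or Copilot index gives you an inventory of where classified data sits, which is the precondition for the CASB/upload controls in step 3.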

VP of Information Security in Software, a year ago

I agree with Andrea's emphasis on education and communication. Given the novelty of GenAI technologies, there isn't a universally accepted method for handling them. However, treating them like any other new technology—by educating and discussing their implications thoroughly—seems to be an effective strategy.

CISO in Energy and Utilities, a year ago

Initially, when we encountered the explosion of chatbot technologies like ChatGPT, we had to act swiftly with the limited resources available. One fundamental approach we adopted was to define clear guidelines for employees regarding their usage of these tools. We emphasized the importance of understanding not only how to use the tools but also the implications of the information they generate. This education helps employees discern whether the responses they receive are peculiar or misleading, which is crucial when they use this data for business or sales decisions. We chose not to block access to these technologies but instead focused on responsible usage.

We also took steps to ensure that any copying of confidential content by these tools was highlighted as a significant risk. Our next step was to seek a solution that provided a safer, more controlled environment, which we could contractually protect, rather than leaving it to the unpredictable nature of the broader internet.
