What measures have you implemented to address security and compliance concerns when deploying GenAI tools? How do you ensure responsible use of these technologies?
We've blocked access to certain sites within the company, allowing only Microsoft Copilot. Copilot is mandated on the work site, which keeps any input within our own environment. For projects or teams using AI, we assess whether a cybersecurity review or legal approval is needed based on the use case; customer-facing applications require legal approval because of their additional public exposure. Our approach varies by use case, ensuring the necessary approvals are in place before deployment. Using our own Azure services within our VPN typically avoids compliance issues, but third-party tools require extensive security, legal, and compliance reviews.
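As a rough illustration, that use-case routing can be expressed as a small triage function. The sketch below is hypothetical: the attributes and rules are illustrative stand-ins for an actual intake process, not a real implementation.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Illustrative intake attributes only; not an actual intake form."""
    name: str
    customer_facing: bool
    third_party_tool: bool        # runs outside our own Azure tenant / VPN
    handles_sensitive_data: bool

def required_reviews(uc: AIUseCase) -> list[str]:
    """Route a use case to the approvals described above (hypothetical rules)."""
    reviews = []
    if uc.third_party_tool:
        # Third-party tools get the full security/legal/compliance workup.
        reviews += ["security", "legal", "compliance"]
    elif uc.handles_sensitive_data:
        # Our own Azure services inside the VPN typically need less review,
        # but sensitive data still triggers a security look.
        reviews.append("security")
    if uc.customer_facing and "legal" not in reviews:
        reviews.append("legal")
    return reviews

if __name__ == "__main__":
    demo = AIUseCase("support chatbot", customer_facing=True,
                     third_party_tool=False, handles_sensitive_data=True)
    print(required_reviews(demo))  # ['security', 'legal']
```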
We began by establishing policies and requiring training for anyone using GenAI technologies, including a sign-off and testing to confirm understanding of responsible use. Each associate carries a level of responsibility, complemented by cybersecurity measures that restrict certain actions, such as blocking sites like ChatGPT. For the small cohorts using Copilot, we provide clear guidelines on permissible actions, supported by training from vendor specialists. Our approach relies on continuous training, since people can inadvertently circumvent controls, and AI training is integrated into our regular and annual training to reinforce the importance of responsible use.
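On the technical-controls side, the site blocking amounts to an allowlist/blocklist at the egress proxy or DNS layer. A minimal sketch, assuming a Python-scriptable filter; the domain lists and policy values are illustrative examples, not an actual configuration:

```python
# Illustrative allowlist check for GenAI egress filtering.
# Domains and policy below are hypothetical examples.

ALLOWED_GENAI_HOSTS = {
    "copilot.microsoft.com",   # the sanctioned assistant
    "copilot.cloud.microsoft",
}
BLOCKED_GENAI_HOSTS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def genai_policy(host: str) -> str:
    """Return 'allow', 'block', or 'default' for a requested host."""
    host = host.lower().rstrip(".")
    # Match the host itself or any parent domain (e.g. x.chatgpt.com).
    parts = host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    if candidates & ALLOWED_GENAI_HOSTS:
        return "allow"
    if candidates & BLOCKED_GENAI_HOSTS:
        return "block"
    return "default"  # fall through to normal proxy rules

if __name__ == "__main__":
    for h in ("copilot.microsoft.com", "chat.openai.com", "example.com"):
        print(h, "->", genai_policy(h))
```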
We have an AI task force that reviews all AI use cases, covering legal compliance, cybersecurity, data science, and data governance. Despite this, we've encountered issues, such as ChatGPT surfacing documents from SharePoint sites whose permissions weren't locked down. Although the content sat behind our firewalls, the way the information was presented confused users. Enterprise-grade platforms with embedded AI tools need increased scrutiny and testing before entering the environment. It's crucial to anticipate how people will use these tools at scale, which requires constantly revisiting them to ensure appropriate use.
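One way to catch those SharePoint permission gaps is to audit sharing-link scopes before an AI assistant is allowed to index a site. A minimal sketch using standard Microsoft Graph v1.0 endpoints; the token handling is a placeholder, it only walks the root of each site's default library, and a real run would need paging, throttling, and error handling:

```python
# Sketch: flag SharePoint content reachable through org-wide or anonymous
# sharing links. Endpoints are standard Microsoft Graph v1.0 calls; the
# token is a placeholder (acquire one via MSAL with Sites.Read.All).
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app-token with Sites.Read.All>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def get(path: str) -> dict:
    resp = requests.get(f"{GRAPH}{path}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

def audit_site(site_id: str) -> None:
    # Walk the site's default document library (root level only, for brevity).
    for item in get(f"/sites/{site_id}/drive/root/children").get("value", []):
        perms = get(f"/sites/{site_id}/drive/items/{item['id']}/permissions")
        for perm in perms.get("value", []):
            link = perm.get("link") or {}
            if link.get("scope") in ("anonymous", "organization"):
                print(f"BROAD ACCESS: {item['name']} ({link['scope']} link)")

if __name__ == "__main__":
    # '?search=*' enumerates sites visible to the app registration.
    for site in get("/sites?search=*").get("value", []):
        audit_site(site["id"])
```

Flagging "organization"-scoped links matters because an AI tool with tenant-wide reach will happily surface anything shared with "Everyone," even when the site itself looks locked down.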