For third-party applications, is your organization putting in place any systematic review to identify new AI features that have been introduced, so that they can undergo an AI risk evaluation?
Thank you for this response. We feel good that we're catching AI capabilities in net-new applications and vendors, but our existing vendors are turning on capabilities with varying degrees of formal notice and configurability, and we're trying to get a better handle on that.
This is on our to-do list, but we have not started it yet. It would be a combination of our Enterprise Digital team (identifying AI capabilities in third-party systems we want to use) and our Cybersecurity team (risk evaluation).
No, we don't, but we block unknown AI apps at our firewall. As long as the firewall can detect the AI application, it gets blocked. A good example is Adobe Acrobat - its AI feature that sends (confidential) documents to the cloud for analysis was blocked.
A negative example is MS Copilot - depending on which Copilot feature and access method is used, there are loopholes the firewall doesn't detect because the traffic is sometimes classified as Bing Search, etc.
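To illustrate the kind of detection-based blocking described above, here's a rough Python sketch. The domain lists and helper names are purely hypothetical stand-ins for a firewall vendor's own application signatures, not anyone's actual block list, and real enforcement would happen on the firewall itself rather than in a script like this.

```python
# Rough sketch of category-based egress filtering for AI services.
# The domain lists below are illustrative only; a production firewall
# would rely on vendor-maintained application signatures instead.

from urllib.parse import urlparse

# Hypothetical block list of AI service endpoints (not exhaustive).
BLOCKED_AI_DOMAINS = {
    "api.openai.com",
    "acrobat.adobe.com",        # e.g. Acrobat's cloud AI document analysis
    "copilot.microsoft.com",
}

# Domains that can mask AI traffic (the "classified as Bing Search" loophole).
AMBIGUOUS_DOMAINS = {
    "www.bing.com",
}

def classify(url: str) -> str:
    """Return 'block', 'review', or 'allow' for an outbound request."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_AI_DOMAINS:
        return "block"
    if host in AMBIGUOUS_DOMAINS:
        return "review"   # simple host matching isn't enough; needs deeper inspection
    return "allow"

if __name__ == "__main__":
    for url in (
        "https://acrobat.adobe.com/ai/analyze",
        "https://www.bing.com/search?q=copilot",
        "https://example.com/",
    ):
        print(url, "->", classify(url))
```

The "review" bucket is the interesting part: it's exactly where Copilot-style traffic lands when it can't be distinguished from ordinary search by hostname alone.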
Thanks, this is very helpful and I appreciate the examples. I'm referring to apps like Jira, AuditBoard, ServiceNow, Workday, etc. ... where it's not an "AI" app per se, but the vendor is adding AI capabilities to it.
We have introduced a new AI-specific risk analysis step for all new applications as an add-on to the standard cyber risk evaluation. It's still a work in progress to calibrate how much effort to spend on these new risk evaluations for various kinds of apps, but it's a good first step.
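As a rough illustration of what such an add-on step might capture, here's a small sketch of an AI-specific review record attached to a standard vendor assessment. The field names and triage thresholds are assumptions for the sake of example, not an actual template.

```python
# Sketch of an AI-specific add-on record attached to a standard vendor
# cyber risk review. Field names and triage rules are illustrative only.

from dataclasses import dataclass, field

@dataclass
class AIRiskAddon:
    vendor: str
    application: str
    ai_features: list[str] = field(default_factory=list)
    data_shared_with_model: str = "unknown"      # e.g. none / metadata / content
    can_be_disabled: bool = False                # admin toggle available?
    training_on_customer_data: str = "unknown"   # per vendor documentation
    notes: str = ""

    def triage(self) -> str:
        """Crude effort calibration: how deep should the AI review go?"""
        if self.data_shared_with_model == "content" and not self.can_be_disabled:
            return "full review"
        if self.ai_features:
            return "standard review"
        return "no AI review needed"

# Example: an existing vendor turns on a summarization feature.
record = AIRiskAddon(
    vendor="ExampleVendor",
    application="ExampleApp",
    ai_features=["summarization"],
    data_shared_with_model="content",
    can_be_disabled=True,
)
print(record.triage())  # -> "standard review"
```

Even a lightweight record like this makes the calibration question concrete: features that can't be disabled and that ship customer content to a model get the full treatment, while the rest get a faster pass.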