How do you calculate ROI for AI investments in security, given that benefits to risk reduction can be hard to quantify? What metrics best demonstrate the extent to which they’re providing value, both from your own POV as the security leader, and from the business’s POV?
We began securely integrating AI, starting with Copilot, about two and a half years ago, and we use a two-tier ROI framework, benchmarking against the market and our competition. The first tier is operational ROI, focusing on hard metrics such as FTE hours saved, mean time to triage alerts, manual steps avoided, tooling license savings, storage and egress costs, incident containment costs, downtime, and escalation rates. The second tier is risk reduction, which is critical for a company of our size and sensitivity. Here we assess expected loss reduction, the likelihood of incidents, and whether incidents could have been prevented by AI. Metrics tracked include mean time to detect, mean time to repair, escalation rates, analyst touches per incident, alert precision, false-positive rate, auto-closure accuracy, batch cycle times, SLA percentages for high-risk vulnerabilities, automation coverage, and unit economics such as cost per triaged alert and cost per protected endpoint. This holistic approach guides our ROI calculations for any AI product.
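As a rough illustration of how a two-tier framework like this can be turned into numbers, here is a minimal Python sketch. Every function name and figure below is a hypothetical placeholder, not a benchmark from this answer: tier one sums hard operational savings against the annual cost of the AI tooling, and tier two expresses risk reduction as the change in expected annual loss (likelihood times impact) before and after AI.

```python
# Minimal sketch of a two-tier ROI calculation.
# All figures are hypothetical placeholders, not real benchmarks.

def operational_roi(fte_hours_saved: float, loaded_hourly_rate: float,
                    license_savings: float, storage_egress_savings: float,
                    incident_containment_savings: float,
                    annual_ai_cost: float) -> dict:
    """Tier 1: hard operational savings measured against the annual AI tooling cost."""
    benefit = (fte_hours_saved * loaded_hourly_rate
               + license_savings
               + storage_egress_savings
               + incident_containment_savings)
    return {"benefit": benefit, "roi": (benefit - annual_ai_cost) / annual_ai_cost}


def risk_reduction_value(incident_likelihood_before: float,
                         incident_likelihood_after: float,
                         expected_loss_per_incident: float) -> float:
    """Tier 2: expected annual loss reduction (likelihood x impact, before vs. after AI)."""
    ale_before = incident_likelihood_before * expected_loss_per_incident
    ale_after = incident_likelihood_after * expected_loss_per_incident
    return ale_before - ale_after


if __name__ == "__main__":
    tier1 = operational_roi(
        fte_hours_saved=4_000,          # analyst hours saved per year (assumed)
        loaded_hourly_rate=85.0,        # fully loaded cost per analyst hour (assumed)
        license_savings=60_000,         # retired tooling licenses (assumed)
        storage_egress_savings=25_000,  # log storage and egress reductions (assumed)
        incident_containment_savings=40_000,
        annual_ai_cost=300_000,
    )
    tier2 = risk_reduction_value(
        incident_likelihood_before=0.30,   # annual probability of a material incident
        incident_likelihood_after=0.18,    # estimated probability with AI-assisted detection
        expected_loss_per_incident=2_000_000,
    )
    print(f"Tier 1 operational benefit: ${tier1['benefit']:,.0f} "
          f"(ROI {tier1['roi']:.0%} vs. tool cost)")
    print(f"Tier 2 expected annual loss reduction: ${tier2:,.0f}")
```

With these placeholder inputs, tier one shows the operational benefit relative to tooling cost, while tier two puts a dollar figure on risk reduction that can sit alongside the harder operational numbers.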
We take a broad approach, considering not only internal security but also empowering end users, who are ultimately the human shield protecting the company. Our strategy is three-tiered: enabling end users to use AI (such as Copilot), training them, and then exploring further solutions and capabilities. It’s important to recognize that AI is not just a technology issue; it’s a business issue. The business must provide a use case for the technology to enable; without a business use case, technology alone cannot solve day-to-day problems. Our journey began about six or seven months ago and continues to mature, extending beyond the security team to the teams around it that leverage our capabilities.
Our main use of AI is to drive efficiency, freeing up people’s time from everyday tasks so they can focus on more valuable work. For a small team, increasing efficiency is essential. For example, we look at how long it takes our help desk to resolve calls or investigate incidents and then assess which AI tools can reduce that time. We receive alerts from many devices and aim to build context around them, but building that context requires significant input. Often, we lack the time to provide all the necessary context, so the AI operates with only part of the information it needs. By increasing efficiency in routine tasks, we gain more time to build and refine that context. Our approach to ROI is centered on these efficiency gains.
There isn’t a magic answer for ROI, but we invest considerable time and effort in leveraging AI-based capabilities across our governance, risk, and compliance domains. For example, we use AI to accelerate processes within our SOC, and we’re experimenting with ways to help our architecture and engineering teams increase velocity while ensuring focus on the right areas. Many of our metrics are qualitative, such as how an AI-enabled system speeds up a process. Speed is a significant metric, as is accuracy, particularly when a human verifies the AI’s output before submission, such as in vendor reviews or third-party risk assessments. Internally, we use Google Gemini to create tools for rapid and robust vendor assessment. If a process previously took 12 hours and now takes two, that’s a metric we share with leadership. This is how we approach ROI at Thoughtworks.
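To translate that 12-hours-to-two example into something leadership can act on, a simple annualized view helps. This is a hedged sketch: only the 12-hour and 2-hour figures come from the answer above; the assessment volume and hourly rate are assumptions.

```python
# Annualizing the speed metric mentioned above: a vendor assessment
# dropping from 12 hours to 2. Volume and hourly rate are hypothetical.

hours_before = 12.0
hours_after = 2.0
assessments_per_year = 150          # assumed annual third-party review volume
loaded_hourly_rate = 95.0           # assumed fully loaded analyst cost per hour

hours_saved_each = hours_before - hours_after
annual_hours_saved = hours_saved_each * assessments_per_year
reduction = hours_saved_each / hours_before

print(f"Per-assessment reduction: {reduction:.0%} ({hours_saved_each:.0f} hours)")
print(f"Annual analyst hours freed: {annual_hours_saved:,.0f} "
      f"(~${annual_hours_saved * loaded_hourly_rate:,.0f} in capacity)")
```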

AI is just another tool, and a business use case must drive its adoption, including consideration of potential security spend and the human-in-the-loop element. While there is excitement around AI’s potential, it’s important to focus on solving actual business problems rather than treating technology as the goal itself. Our approach is to determine the cost of implementing a business process or fix, not just the tool’s cost, and then assess whether it meets current needs in a reasonable timeframe. The full picture is essential, rather than relying on long-term promises that may never materialize.