How are you deciding which SOC or security operations work should be augmented or even fully automated with AI? Which tasks should remain entirely under human control and which should only require human oversight?

Associate Vice President, Information Technology & CISO in Education, 8 days ago

There are several straightforward candidates for AI augmentation or automation: the activities that tend to bog down tier-one analysts, such as log enrichment, repetitive investigation, triage, and noise filtering. These tasks are routine and high-volume, which makes them ideal for automated handling.

Human capital, on the other hand, should be focused on the more unique and context-specific aspects within the organization. Analysts bring contextual knowledge that AI cannot replicate, such as understanding the behavior of privileged users in sensitive departments or recognizing routine activities that might otherwise appear suspicious to an AI model. Tasks that require years of institutional experience and nuanced judgment, like investigating anomalous behaviors or double-checking flagged files, are best reserved for human analysts. In summary, AI should manage the simpler, repetitive tasks, while humans address the complex, context-driven scenarios.
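The tier-one work described above can be sketched in a few lines. This is a hypothetical illustration, not a real SOC pipeline: the asset table, suppression rules, and alert fields are all invented for the example, and in practice the context would come from a CMDB and the noise rules would be tuned per environment.

```python
# Hypothetical sketch: enrich raw alerts with asset context, then
# suppress known-benign noise so analysts only see what remains.

ASSET_CONTEXT = {  # would come from a CMDB in a real deployment
    "10.0.0.5": {"owner": "finance", "criticality": "high"},
    "10.0.0.9": {"owner": "lab", "criticality": "low"},
}

KNOWN_NOISE = {"scheduled_scan", "backup_job"}  # tuned per environment

def enrich(alert):
    """Attach asset-context fields to a raw alert."""
    context = ASSET_CONTEXT.get(
        alert["src_ip"], {"owner": "unknown", "criticality": "unknown"}
    )
    return {**alert, **context}

def triage(alerts):
    """Enrich every alert, then drop rules known to be benign noise."""
    enriched = [enrich(a) for a in alerts]
    return [a for a in enriched if a["rule"] not in KNOWN_NOISE]

raw = [
    {"src_ip": "10.0.0.5", "rule": "suspicious_login"},
    {"src_ip": "10.0.0.9", "rule": "scheduled_scan"},
]
queue = triage(raw)  # only the suspicious_login alert survives
```

The point of the sketch is the division of labor: the mechanical enrich-and-filter pass is automated, while everything left in `queue` goes to a human who has the institutional context the filter lacks.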

8 days ago

Building on John’s points, it is important to assign low-probability or highly unique tasks to human talent. Engaging analysts with these challenging and varied responsibilities keeps them interested and invested in their work. If analysts are left with only the mundane, repetitive tasks, retention becomes an issue, as they are less likely to stay with the organization. Ultimately, having skilled people focused on the meaningful, unique aspects of security operations is essential, regardless of the tools in place.

15 days ago

We rely on third parties for most of our SOC and SOAR functions. During our reviews, we try to understand what is being automated in the background, particularly for tasks that are reactions to detections. For example, we look at automating actions such as blocking emails or removing malicious emails from inboxes after they have been delivered. Our current focus is on determining which AI tools can help us identify larger problems or support incident response activities in a way that allows us to interact conversationally with the system.

I am not strictly controlling what should be automated versus what should require human interaction. However, I believe it is essential to have human oversight over any automated decision, so that we do not simply accept machine output as the final word. It can be difficult to obtain complete details from third parties, but there are certain tasks we want to keep under our own control. Ultimately, oversight over automated decisions is what maintains accountability.

15 days ago

I agree, it is incredibly important to have human oversight over automated decisions. Automation is useful for tasks such as isolating a device or responding to malware detected on an endpoint, but I still want an analyst to review these actions and intervene if necessary. Incident response investigations, in particular, require significant human oversight due to liability concerns, including lawsuits and legal privilege, so these are areas where I would be especially vigilant about involving analysts.
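One common way to implement the oversight described here is an approval gate: automation may propose any response action, but destructive ones are queued for an analyst rather than executed immediately. The sketch below is a minimal, hypothetical illustration of that pattern; the action names and return values are invented for the example and are not any particular SOAR product's API.

```python
# Hypothetical sketch of a human-in-the-loop gate: automation can
# propose an action (e.g. isolate a device), but actions classed as
# destructive wait for analyst approval before they run.

DESTRUCTIVE_ACTIONS = {"isolate_device", "disable_account"}

def execute(action, target, analyst_approved=False):
    """Run an automated response, holding destructive ones for review."""
    if action in DESTRUCTIVE_ACTIONS and not analyst_approved:
        # Queued for a human decision instead of executing blindly.
        return {"status": "pending_review", "action": action, "target": target}
    return {"status": "executed", "action": action, "target": target}

# Automation proposes isolation; it is queued, not executed.
pending = execute("isolate_device", "host-42")

# After an analyst signs off, the same action is carried out.
done = execute("isolate_device", "host-42", analyst_approved=True)
```

The design choice is that the gate lives in the execution path itself, so no automated playbook can bypass review for destructive actions, which matches the accountability and liability concerns raised above.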

VP, IT in Manufacturing, 19 days ago

There is a distinction between what machine learning and AI tools can do and how much agency we grant them. The user’s role remains critical. Even if an AI can isolate a device or perform a SOAR action, governance over those actions must be clear. We are currently discussing which actions a tier one SOC agent or AI can take versus those that require oversight. There is no definitive answer yet, as there are many possible actions based on different scenarios. Recently, our steering committee discussed creating a top ten list of tasks to help decide, from a business perspective, which are acceptable to automate. As a manufacturer, automating certain actions could have significant consequences, such as revenue loss if a production line is taken down. We are carefully evaluating which actions are appropriate for automation and seeking business approval for each.

CISO in Finance (non-banking), 21 days ago

One area where we allow automation is the removal of malicious emails. When a phishing alert is triggered by a user, our EDR reviews it, and if the email is confirmed as malicious, we permit automated removal of similar emails from other inboxes. This is a clear-cut decision: once we know it is a bad email, we want it out of inboxes immediately, without waiting for an analyst to manually intervene. 
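The confirm-then-remove flow described here can be sketched simply: once one reported message is confirmed malicious, automation sweeps other mailboxes for copies with the same sender and subject and pulls them. This is a hypothetical toy model with in-memory mailboxes, not a real mail-platform integration; the matching criteria and data shapes are assumptions for the example.

```python
# Hypothetical sketch: after one email is confirmed malicious, sweep
# all other mailboxes and remove messages matching the same
# sender/subject pair, without waiting for manual analyst action.

def sweep_mailboxes(mailboxes, confirmed):
    """Remove copies of a confirmed-malicious message from every inbox.

    Returns the (user, message) pairs that were removed, for audit.
    """
    removed = []
    for user, inbox in mailboxes.items():
        keep = []
        for msg in inbox:
            if (msg["sender"] == confirmed["sender"]
                    and msg["subject"] == confirmed["subject"]):
                removed.append((user, msg))
            else:
                keep.append(msg)
        mailboxes[user] = keep
    return removed

mailboxes = {
    "alice": [{"sender": "evil@example.com", "subject": "Invoice"}],
    "bob": [{"sender": "evil@example.com", "subject": "Invoice"},
            {"sender": "hr@corp.example", "subject": "Benefits"}],
}
confirmed_bad = {"sender": "evil@example.com", "subject": "Invoice"}
removed = sweep_mailboxes(mailboxes, confirmed_bad)  # two copies pulled
```

Returning the removed messages preserves an audit trail, so the automated action remains reviewable after the fact even though no analyst was in the loop at execution time.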
