If your organization starts using generative AI for security ops (like threat hunting or incident response), does that impact how you think about your team's roles/responsibilities? Would you expect to need fewer staff for SecOps, or even more? More or fewer high-skilled employees?
Yes, it is concerning if employees are not sufficiently trained. I would limit these services until we have identified the potential risks.
In short, yes: as we leverage AI for security ops, the role of the first-level SOC analyst becomes redundant. Basic tasks that a first-level analyst performs, such as reviewing logs and creating events/alerts, can be automated with prompts answered by an LLM or chatbot. Even if you pay extra for the capability, the human expense is reduced.
I am not saying this is the case today, but that is how we see it playing out over the next 12 to 18 months as the features mature.
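To make the automation claim above concrete, here is a minimal, vendor-neutral sketch of how first-level log triage could be delegated to an LLM: build a structured prompt from raw log lines, then parse the model's reply into alerts. The prompt format, severity labels, and the idea of a pipe-delimited reply are all assumptions for illustration, not any specific product's API; the actual call to a chat-completion endpoint is left out.

```python
# Hypothetical sketch of LLM-assisted level-1 triage. The prompt wording,
# severity labels, and reply format are illustrative assumptions only.

def build_triage_prompt(log_lines):
    """Wrap raw log lines in a prompt asking the model to rate each one."""
    header = (
        "You are a level-1 SOC analyst. For each numbered log line below, "
        "reply with exactly one line in the form:\n"
        "<line number>|<LOW|MEDIUM|HIGH>|<one-sentence reason>\n\n"
    )
    body = "\n".join(f"{i}: {line}" for i, line in enumerate(log_lines, 1))
    return header + body

def parse_triage_response(text, threshold="HIGH"):
    """Turn the model's pipe-delimited reply into alert dicts at/above threshold."""
    ranks = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}
    alerts = []
    for row in text.strip().splitlines():
        num, severity, reason = (field.strip() for field in row.split("|", 2))
        if ranks.get(severity, 0) >= ranks[threshold]:
            alerts.append({"line": int(num), "severity": severity, "reason": reason})
    return alerts
```

In practice, the output of `build_triage_prompt` would be sent to whatever chat-completion endpoint your stack uses, and `parse_triage_response` would feed the resulting alerts into the existing ticketing or SIEM pipeline, which is exactly the slice of first-level analyst work the comment above describes.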
I'd argue it's relatively unchanged. As the industry continues to consolidate tools and automate security functions, threat actors are also innovating and using the same tools against you. In my experience, we are simply shifting resources away from older, but still necessary, security tools as they mature, and toward newer threat defenses.