When You Train Your SOC, Don’t Forget the AI

It’s Time to Prioritize SOC Range Training

Team simulation training requires inclusion of Artificial Intelligence (AI) security platforms to address emerging threats.

By Dr. Edward Amoroso, CEO, TAG Infosphere, and Research Professor, NYU

Introduction

In the last couple of years, our team at TAG has seen a massive surge in the adoption of AI-based security technologies. Enterprises are purchasing AI-driven threat detection, AI-based risk analysis, and AI-generated incident response recommendations, and this is an exciting development in our industry. But the reality is that buying an AI platform doesn’t automatically make your SOC team ready to use it effectively.

Just as with any powerful new technology, mastery comes through practice. That’s why today’s live-fire simulation training must evolve to explicitly incorporate AI-based security platforms. Enterprise security leaders need to invest the time and energy to train their human defenders not only to recognize traditional threats but also to operate, trust, validate, and at times override AI recommendations.

The Rise of AI in the SOC

It is not always easy to spot trends as they are occurring, but in the context of AI, the ongoing decisions, changes, and issues have been relatively easy to identify. In particular, we have noticed that most SOC teams now have at least one or two AI-enhanced platforms in their stack, which implies the need to understand how to use such technology. Specifically, these systems are capable of the following functionality:

  • Prioritizing SOC alerts using machine learning.

  • Suggesting the best possible response actions in real time.

  • Automatically correlating events across vast datasets.

  • Even recommending policy changes based on behavioral analytics.
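The first of these capabilities, ML-based alert prioritization, can be pictured with a minimal sketch. The scoring function below is a hypothetical stand-in for a trained model, and the feature names and weights are illustrative; a real platform would learn them from labeled incident data.

```python
# Minimal sketch of ML-style alert triage. score_alert is a hypothetical
# stand-in for a trained model; real platforms learn weights from data.

def score_alert(alert: dict) -> float:
    """Combine simple alert features into a priority score in [0, 1]."""
    weights = {"severity": 0.5, "asset_criticality": 0.3, "anomaly": 0.2}
    return sum(weights[k] * alert[k] for k in weights)

alerts = [
    {"id": "A1", "severity": 0.9, "asset_criticality": 0.8, "anomaly": 0.4},
    {"id": "A2", "severity": 0.3, "asset_criticality": 0.2, "anomaly": 0.9},
    {"id": "A3", "severity": 0.7, "asset_criticality": 0.9, "anomaly": 0.7},
]

# Highest-priority alerts first, as an AI triage layer might present them.
triaged = sorted(alerts, key=score_alert, reverse=True)
print([a["id"] for a in triaged])  # → ['A1', 'A3', 'A2']
```

The point of range training is not the scoring math itself, but teaching analysts what such a ranking does and does not capture, so they know when to question it.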

While these capabilities are valuable, they introduce new complexities. Analysts must understand how these AI models reach their conclusions. They must recognize when a model’s recommendation is spot-on, and when human judgment must override automation to avoid catastrophic errors. Our view is that SOC training must become an essential element of the equation in dealing with this new challenge from AI.

Simulation Training: The Missing Piece

Unfortunately, most traditional training exercises have not accounted for these realities, which is understandable given the relatively recent introduction of AI to the SOC. As such, SOC teams tend to train their analysts to react to threats manually, without the assistance (or complication) of AI input, and that’s a gap that must be closed. Forward-looking platforms like Cloud Range have recognized this and are evolving their exercises to support the following:

  • Inject AI Decision Points: Teams must decide when to trust AI recommendations versus when to dig deeper.

  • Simulate AI Errors: Exercises include scenarios where the AI suggests incorrect actions, training teams to spot and correct these issues.

  • Model Adversarial AI Attacks: Simulations where attackers attempt to poison or manipulate AI models, preparing teams for emerging threats.
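The second item, simulating AI errors, can be sketched in a few lines. This is not how any particular range platform implements it; it is a hedged illustration in which the exercise injector deliberately returns a wrong recommendation at a configurable rate, and the grading logic rewards the analyst for the correct action rather than for obedience to the AI. The threat names and actions are invented for the example.

```python
import random

# Hypothetical exercise injector: with probability error_rate, the "AI"
# recommends the wrong response action, so analysts learn to verify it.

CORRECT = {"ransomware": "isolate_host", "phishing": "reset_credentials"}

def ai_recommendation(threat: str, error_rate: float,
                      rng: random.Random) -> str:
    """Return the right action, or a deliberately wrong one at error_rate."""
    if rng.random() < error_rate:
        wrong = [a for a in CORRECT.values() if a != CORRECT[threat]]
        return rng.choice(wrong)
    return CORRECT[threat]

def grade(threat: str, analyst_action: str) -> bool:
    """An analyst passes by taking the correct action, not by copying the AI."""
    return analyst_action == CORRECT[threat]

rng = random.Random(7)
rec = ai_recommendation("ransomware", error_rate=1.0, rng=rng)  # forced error
print(rec)                                   # → reset_credentials (wrong on purpose)
print(grade("ransomware", "isolate_host"))   # → True: analyst overrode the AI
```

Running drills with a nonzero error rate makes "the AI said so" an insufficient answer in the after-action review, which is precisely the habit the exercise is meant to build.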

Learning to Work With (Not Against) the Machine

The most effective SOC teams in the coming years will be those who know how to partner with AI in order to achieve the following key operational objectives, each of which we believe will soon characterize a new normal in SOC support:

  • Understand Bias and Limitations: Teams must grasp that AI models are only as good as their training data and that biases can creep in.

  • Validate and Corroborate: Analysts must learn to validate AI-generated alerts against independent telemetry before acting.

  • Tune the Models: Security engineers must practice adjusting AI thresholds and feedback loops based on organizational needs.
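The "validate and corroborate" and "tune the models" skills above can be combined in one small sketch. Everything here is illustrative: the field names, the EDR-style telemetry source, and the tunable threshold are assumptions for the example, not any vendor's actual API. The idea is simply that an AI alert triggers action only when its score clears an organization-specific threshold and an independent telemetry source agrees.

```python
# Hedged sketch of corroborating an AI-generated alert against independent
# telemetry before acting. Field names and the threshold are illustrative.

ALERT_THRESHOLD = 0.8  # tunable per organization during range exercises

def corroborated(ai_alert: dict, telemetry: list) -> bool:
    """Act only if the AI score clears the threshold AND an independent
    telemetry source flagged the same host."""
    if ai_alert["score"] < ALERT_THRESHOLD:
        return False
    return any(t["host"] == ai_alert["host"] and t["suspicious"]
               for t in telemetry)

alert = {"host": "srv-12", "score": 0.91}
edr_events = [{"host": "srv-12", "suspicious": True},
              {"host": "srv-40", "suspicious": False}]
print(corroborated(alert, edr_events))  # → True: high score and EDR agrees
```

Raising or lowering `ALERT_THRESHOLD` during an exercise, and watching the false-positive and false-negative rates move, is exactly the feedback-loop practice the bullet above describes.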

All of these skills can, and should, be developed through live-fire range exercises, and our observation at TAG is that Cloud Range does a particularly effective job in each of these important areas.

The Risk of Blind Trust

Blindly following AI outputs without human scrutiny is a recipe for disaster. Adversaries already recognize this and are developing attack strategies aimed at tricking or manipulating AI systems. Through targeted range training, SOC teams can learn how to:

  • Spot adversarial manipulation attempts.

  • Recognize when AI is being used against them.

  • Maintain vigilance and critical thinking even in an automated environment.

Conclusion

When we are honest, we must acknowledge that the future of cybersecurity will not be purely human or purely machine. Rather, it will be a hybrid model where skilled people leverage powerful AI tools. But that synergy doesn’t happen automatically. It requires deliberate, structured training, and SOC management would be wise to understand this new requirement in the immediate term.

About TAG

Recognized by Fast Company, TAG is a trusted next-generation research and advisory company that utilizes an AI-powered SaaS platform to deliver on-demand insights, guidance, and recommendations to enterprise teams, government agencies, and commercial vendors in cybersecurity and artificial intelligence.


Download the AI-driven Cyber Simulations One Pager and learn how SOC teams train against polymorphic malware, deepfake phishing, and AI-powered threats.
