AI vs. Cybercriminals: Who Wins the Race in Next-Gen Threat Detection?
We are witnessing an unparalleled increase in the sophistication of cyber threats. According to IBM’s 2024 Cost of a Data Breach Report, the global average cost of a data breach jumped to $4.88 million, a 10% spike and the largest year-over-year increase since the pandemic. Malicious insider attacks proved the most expensive at $4.99 million per breach, while organizations facing security staffing shortages saw costs rise by an average of $1.76 million compared to those with adequate staffing.
As a software development engineer focused on cybersecurity automation and threat intelligence, I’ve seen how this technological arms race is reshaping security operations and raising the stakes for the future of cybersecurity.
How Adversarial AI Enables Cybercriminals
Adversarial AI enables cybercriminals to circumvent traditional detection systems. No longer relying on brute force attacks or known vulnerabilities, they are developing sophisticated methods to manipulate machine learning models designed to catch them.
The methods are both sophisticated and alarming. Attackers study the decision-making processes of security AI systems, then use that knowledge to build better, more effective malware. Security AI is built to analyze big data so that security teams can understand what normal user or system activity looks like. When attackers build their malware, they now plan for, and often successfully evade, exactly the kinds of big data analyses that security teams hope will cut through the noise and expose them.
Although there hasn’t been a direct attack on our machine learning models, we have accounted for this danger when designing our systems. In my work, we continuously retrain our models on fresh threat intelligence and use adversarial testing to mimic how attackers attempt to evade detection. We rely on anomaly detection, contextual risk scoring and behavioral analytics instead of static signatures. This helps ensure the system flags unusual activity even if an attacker manipulates inputs to slip past defenses. By combining multiple detection layers, we significantly reduce the likelihood that attackers can fool our AI models.
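To make that layering concrete, here is a minimal sketch of how an anomaly detector trained on a baseline of normal activity might be blended with a contextual signal. The feature layout, the weights and the choice of scikit-learn’s IsolationForest are illustrative assumptions for this example, not a description of our production stack.

```python
# A minimal sketch of layered detection, assuming event features have already
# been extracted into numeric vectors; features and weights are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline of "normal" activity: e.g. [logins/hour, bytes_out_mb, failed_auths]
normal_activity = rng.normal(loc=[5.0, 20.0, 0.2], scale=[1.0, 5.0, 0.3], size=(500, 3))

# Layer 1: behavioral anomaly detection learned from the baseline, not signatures.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

def contextual_risk(event_features, source_reputation):
    """Layer 2: blend the anomaly score with context about the event's source."""
    # decision_function returns higher values for normal points, so negate it
    # to get an "anomalousness" score.
    anomaly = -detector.decision_function([event_features])[0]
    # source_reputation in [0, 1], where 1.0 means a known-bad source.
    return 0.7 * anomaly + 0.3 * source_reputation

suspicious_event = [40.0, 300.0, 6.0]      # bursty logins, large egress, many failures
print(contextual_risk(suspicious_event, source_reputation=0.9))
print(contextual_risk([5.2, 19.0, 0.0], source_reputation=0.1))  # close to baseline
```

The point of the combination is that an attacker who learns to game one signal still has to beat the others, which is much harder than evading a single static rule.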
The reality facing security operations today is stark. Human analysts, no matter how talented, cannot process the overwhelming deluge of threats emerging in real time. Like everyone else in my field, I have watched this gap widen steadily.
Security operations centers face millions of events daily, making comprehensive manual review mathematically impossible. A 2024 study by Mandiant found that enterprise security operations centers process an average of 11,000 alerts daily, with analysts able to thoroughly investigate less than 4% of these signals. Human analysis takes minutes or even hours, while modern attacks execute in seconds. The threat landscape evolves too quickly for even the most dedicated analysts to maintain comprehensive awareness, and the cognitive burden of constant alerts inevitably leads to fatigue and missed signals.
What we have is a fundamental imbalance in the security equation: Attackers must succeed just once, while defenders must be correct every time. As threat actors increasingly automate their attacks, security teams composed primarily of humans find themselves in a never-ending game of catch-up, always behind the threat and piecing together what happened from the artifacts attackers leave behind.
AI Systems That Learn for Themselves
The most promising progress in contemporary security operations comes from AI systems that learn for themselves and are therefore better at adapting to new threats. They’re not just the next step in the artificial intelligence revolution; they’re the future of cybersecurity. They transcend traditional signature-based methods, using behavioral fingerprints, a kind of digital DNA, to detect malicious activity and unusual behavior.
In developing scalable security solutions, I’ve focused on building systems that leverage these capabilities to detect threats earlier in the attack chain. Effective AI security models continuously learn from new data, adapting to emerging attack vectors without requiring manual updates. They excel at recognizing subtle patterns across seemingly unrelated events and can anticipate potential attack paths before they’re fully executed.
Perhaps the most important thing is that these systems can respond within the confines of carefully defined parameters; they can contain threats in real time and escalate unusual incidents for human review. This balance of automation and oversight could be the future of effective security operations.
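As a rough illustration of both ideas, continuous adaptation and bounded automated response, here is a minimal sketch built on incremental learning. The thresholds, the feature layout and the use of scikit-learn’s SGDClassifier (a recent version supporting the "log_loss" option) are assumptions made for the example, not a description of any particular production system.

```python
# A minimal sketch, assuming labeled threat-intel batches arrive over time;
# thresholds and features are illustrative, not from a production system.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = benign, 1 = malicious

def ingest_batch(features, labels):
    """Adapt continuously to fresh intelligence without a full retrain."""
    model.partial_fit(features, labels, classes=classes)

def respond(event_features, contain_above=0.9, escalate_above=0.6):
    """Act only within defined parameters: auto-contain, escalate, or just log."""
    p_malicious = model.predict_proba([event_features])[0, 1]
    if p_malicious >= contain_above:
        return "contain"            # e.g. isolate the host, revoke the session
    if p_malicious >= escalate_above:
        return "escalate_to_human"  # unusual but not clear-cut: analyst review
    return "log_only"

# Example: two small batches of (synthetic, illustrative) features and labels.
rng = np.random.default_rng(1)
ingest_batch(rng.normal(0, 1, (200, 4)), rng.integers(0, 2, 200))
ingest_batch(rng.normal(0, 1, (200, 4)), rng.integers(0, 2, 200))
print(respond(rng.normal(0, 1, 4).tolist()))
```

The two thresholds are where the human stays in the loop: only high-confidence detections trigger automatic containment, while the ambiguous middle band always lands in front of an analyst.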
One of my most significant undertakings has been developing an AI-powered threat intelligence system that employs LLMs not just to automate but to enhance threat research. It removes the need for manual assessment of attack data and hand-tuned detection logic, letting us reduce investigation time from eight hours to just one. In real time, it produces actionable detection patterns, interrogates threat intelligence sources and pulls together vital indicators. That means fewer opportunities for bad actors to exploit security oversights, because defenders can respond to incoming threats much more rapidly.
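In broad strokes, such a pipeline might look like the sketch below. The `call_llm` helper, the prompt wording and the output schema are hypothetical placeholders standing in for whatever LLM client and threat-intel feeds an organization actually uses.

```python
# A minimal sketch of LLM-assisted threat research, assuming a generic
# text-completion client; call_llm is a hypothetical placeholder.
import json

def call_llm(prompt: str) -> str:
    """Placeholder: wire this to whichever LLM provider your environment uses."""
    raise NotImplementedError("connect to an LLM client here")

def summarize_report(raw_report: str) -> dict:
    """Ask the model to pull indicators and a draft detection pattern from a report."""
    prompt = (
        "Extract IOCs (IPs, domains, hashes), observed TTPs, and a draft "
        "detection rule from the following threat report. Respond as JSON with "
        'keys "iocs", "ttps", "detection_rule".\n\n' + raw_report
    )
    # Structured output keeps the result machine-usable for downstream detection.
    return json.loads(call_llm(prompt))

def triage_reports(reports: list[str]) -> list[dict]:
    """Fan out over intel sources and collect structured results for analysts."""
    return [summarize_report(r) for r in reports]
```

The time savings come less from the model itself than from turning free-text reporting into structured indicators and draft rules that analysts can review instead of writing from scratch.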
However, there are challenges. AI systems can inherit biases from their training data, which can result in overlooking threats that tend not to match the expected patterns. This is particularly dangerous in cybersecurity, where novel attacks may look nothing like historical examples. Real-time processing also presents a significant hurdle since effective security requires the analysis of massive data streams with minimal latency. Even minor delays can mean the difference between preventing an attack and responding to a breach.
My experience automating security workflows and improving system observability has taught me that solving these problems requires several different tools and techniques. For one, AI that can explain why it makes the decisions it does lets our security analysts evaluate alert legitimacy much faster. For another, we can’t do without a tiered alerting structure in which each successive layer applies progressively more computationally intensive analysis. This lets us balance speed against thoroughness.
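Here is a minimal sketch of what such tiering can look like; the layer logic, thresholds and example addresses are illustrative, and the final layer stubs out the model call. Cheap checks run first, expensive ones only when earlier layers are inconclusive, and every verdict carries the reasons behind it, which is where explainability pays off for analysts.

```python
# A minimal sketch of tiered triage, assuming each layer returns (verdict, reason).

def layer_static_rules(event):
    """Cheapest layer: known-bad lookups and simple thresholds."""
    if event.get("src_ip") in {"203.0.113.7"}:            # example blocklist entry
        return "malicious", "source IP on blocklist"
    return None, None                                      # inconclusive, fall through

def layer_behavioral(event):
    """Mid-cost layer: compare against a per-user baseline."""
    if event.get("bytes_out_mb", 0) > 10 * event.get("baseline_mb", 1):
        return "suspicious", "egress 10x above user baseline"
    return None, None

def layer_model(event):
    """Most expensive layer: full model scoring (stubbed out here)."""
    score = 0.2                                             # stand-in for a model call
    verdict = "suspicious" if score > 0.5 else "benign"
    return verdict, f"model score {score:.2f}"

def triage(event):
    reasons = []
    for layer in (layer_static_rules, layer_behavioral, layer_model):
        verdict, reason = layer(event)
        if reason:
            reasons.append(reason)
        if verdict in ("malicious", "suspicious"):
            return verdict, reasons                         # short-circuit: skip costlier layers
    return "benign", reasons

print(triage({"src_ip": "198.51.100.4", "bytes_out_mb": 55, "baseline_mb": 2}))
```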
We use a multi-layer risk rating method to minimize false positives, giving each alert a context-based score so that, instead of treating every flagged event as equally important, we see each one as part of a holistic picture. This approach aligns with the NIST Cybersecurity Framework 2.0’s updated risk assessment methodology, which emphasizes contextual evaluation of threats based on both asset value and environmental variables. A login attempt from a location with a history of suspicious behavior matters far more than the same attempt from a known, safe location. Likewise, an attempt against a heavily protected, high-value machine, the digital equivalent of a house in a gated community, counts for much more in our book than one against a laptop sitting on a kitchen table.
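To illustrate the idea, here is a minimal context-based scoring sketch; the weights, factor names and the multiplicative form are assumptions made for this example, not values drawn from NIST or from our production scoring.

```python
# A minimal sketch of context-based alert scoring; weights and factors are illustrative.

def risk_score(base_severity, source_reputation, asset_criticality):
    """
    base_severity:      detector-assigned severity in [0, 1]
    source_reputation:  0 = trusted location, 1 = history of suspicious behavior
    asset_criticality:  0 = low-value asset, 1 = crown jewels
    """
    context = 0.5 * source_reputation + 0.5 * asset_criticality
    # Context scales the severity up or down but never zeroes it out entirely.
    return base_severity * (0.4 + 0.6 * context)

# The same detector severity yields very different priorities once context is applied:
print(risk_score(0.7, source_reputation=0.9, asset_criticality=0.9))  # high priority
print(risk_score(0.7, source_reputation=0.1, asset_criticality=0.2))  # low priority
```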
The question the future will bring is not whether AI will transform cybersecurity, but how complete that transformation will be and who will hold the advantage in the transformed systems. According to the Cloud Security Alliance’s 2024 research, “Top Threats to Cloud Computing: The Pandemic Eleven,” organizations shifting to AI-powered security in cloud environments face a critical adaptation period in which threats evolve 5x faster than in traditional infrastructure. It will largely be AI, not humans, that calls the shots in post-quantum defense. Whether that is a good or bad thing is another question altogether.
The security systems that will work best will be those that combine AI with human judgment. Security is a natural fit because processing power and pattern recognition are what computers, and AI in particular, do well. But when decision-making requires more finesse, we’d rather have a human on duty. The CSA research further indicates that hybrid human-AI security operations showed 76% greater resilience against novel cloud-based threats than either fully automated or fully human approaches.
After all, we don’t pay humans just to do what computers can do. And when it comes to fending off future attacks on our systems, a creative human, whether augmented by AI or not, will be far better at understanding the psychology of our would-be attackers and imagining the scenarios they might dream up than any machine we could engineer.
In my work boosting security operations through automation and AI-driven intelligence, I’ve come to see that good defenses maintain this balance: AI helps humans do what they do best, a little faster and at greater scale, while humans handle the judgment calls that no AI can. The result scales the effectiveness of security teams while preserving the creative thinking needed to stay ahead of the curve.
AI will eventually help defenders, but only if it is widely adopted. Right now, attackers can move faster because they have fewer ethical and compliance restrictions to worry about. But look at the resources defenders command: we have far greater access to data, computing capacity and opportunities for automation than attackers do, and we should be the ones operating at “the speed of trust.” The right way forward is to figure out how to incorporate AI so that it operates as a trusted system. Whether we can do that under our current resource constraints, and those constraints are real, may be the strongest argument for investing in AI rather than viewing it only as something that can be used against us.