Use of Defensive AI Against Cyberattacks Grows

Security leaders are increasingly turning to AI and ML-based defenses against cyberattacks as pessimism grows over the efficacy of human-based cybersecurity defense efforts.

A recent survey from MIT Technology Review Insights, sponsored by Darktrace, found that more than half of business leaders believe security strategies based on human-led responses to fast-moving attacks are failing; nearly all have begun to bolster their defenses in preparation for AI-enabled attacks.

Embracing AI as a Force Multiplier

“Cyber AI autonomously stops threats in their tracks and surfaces relevant information in a digestible narrative, augmenting human teams and giving them time to focus on strategic tasks that matter,” said Darktrace’s director of threat hunting, Max Heinemeyer. “All that organizations can do to prepare is simply embrace self-learning AI as a force multiplier.”

He noted that AI-powered cybersecurity platforms can integrate with the other tools in a security toolbox: ingesting new forms of telemetry from existing investments for further enrichment, sharing detections and incidents with workflow tools, and even orchestrating response actions across the rest of the digital estate, for example, by integrating with preventative tools.
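As a rough illustration of that integration pattern, the sketch below wires a single detection through enrichment, a workflow webhook and a containment action. Every endpoint URL, field name and threshold here is a hypothetical placeholder for whatever a given platform exposes, not Darktrace's actual API.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoints standing in for real workflow and preventative tools.
TICKETING_WEBHOOK = "https://workflow.example.com/api/incidents"
FIREWALL_API = "https://firewall.example.com/api/v1/block"

def enrich(detection: dict, telemetry: dict) -> dict:
    """Attach context from an existing telemetry source to a raw detection."""
    detection["context"] = telemetry.get(detection["host"], {})
    return detection

def share_with_workflow(incident: dict) -> None:
    """Push the enriched incident to a ticketing/workflow tool."""
    requests.post(TICKETING_WEBHOOK, json=incident, timeout=10)

def orchestrate_response(incident: dict) -> None:
    """Ask a preventative tool (here, a firewall) to contain the source."""
    if incident.get("severity", 0) >= 8:  # only auto-block high-severity hits
        requests.post(FIREWALL_API, json={"ip": incident["source_ip"]}, timeout=10)

if __name__ == "__main__":
    detection = {"host": "web-01", "source_ip": "203.0.113.7", "severity": 9}
    telemetry = {"web-01": {"owner": "payments", "os": "ubuntu-22.04"}}
    incident = enrich(detection, telemetry)
    share_with_workflow(incident)
    orchestrate_response(incident)
```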

“I expect that AI-based solutions will grow exponentially as we continue to see an inevitable uptick in sophisticated attacks,” Heinemeyer said, noting 97% of the approximately 300 executives surveyed are concerned by the prospect of AI-enabled attacks.

“With defensive AI critical to fighting back, the security industry is realizing that it needs to embrace autonomous security to stay one step ahead of the bad guys,” he said.

The Right Cybersecurity Tools

Dr. Sohrob Kazerounian, AI research lead at Vectra, said AI-based security defenses are the right tool for modern network defenders, not because AI-enabled threats will become some dominant force, but because the defenses are transformative in their own right.

Contrary to flashy Hollywood headlines about some Skynet-like AI hacker coming to get you, actual human attackers are far more clever than any contemporary offensive AI system. This is, in part, because AI systems conform to a series of “rules” and, as every human hacker knows, rules are made to be broken.

“The most likely scenario is that some AI techniques merely make it into the toolkit of human adversaries, such as incorporating natural language AI into large-scale phishing attacks,” explained Kazerounian. “We shouldn’t downplay the impact of a good phishing campaign, but if this is the sum total of what your C-suite is preparing for, you have your work cut out for you.”

He said decisions about which AI cybersecurity solution to commit to should be driven by outcome-based evaluations. This means selecting functional, rather than purely ornamental, solutions.

“Does a particular AI-based solution shoehorn every problem into a single AI technique? Or does the solution treat each problem as something to be solved using the best approach for the task at hand? Does a cybersecurity solution actually move your organization closer to its goals? Does it do so in a cost-effective and efficient manner?” he asked.

Likewise, given the massive amount of network traffic any SOC operation has to deal with, Kazerounian said it’s important to consider whether the solution being tested actually creates more work through false positives or incomplete prioritization of the actual risks to your organization.
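One way to make such an outcome-based evaluation concrete is to compare tools on the analyst workload they generate, not just on raw detection counts. The sketch below scores two pilot trials on precision, recall and false positives per day; the vendor names, field names and figures are illustrative assumptions, not a standard benchmark.

```python
from dataclasses import dataclass

@dataclass
class TrialResult:
    """Outcome of piloting one tool against a labeled alert sample."""
    name: str
    true_positives: int   # alerts confirmed as real threats
    false_positives: int  # alerts that wasted analyst time
    missed: int           # known incidents the tool never flagged
    days: int             # length of the evaluation window

def precision(r: TrialResult) -> float:
    flagged = r.true_positives + r.false_positives
    return r.true_positives / flagged if flagged else 0.0

def recall(r: TrialResult) -> float:
    actual = r.true_positives + r.missed
    return r.true_positives / actual if actual else 0.0

def noise_per_day(r: TrialResult) -> float:
    """False positives per day: a proxy for the extra work a tool creates."""
    return r.false_positives / r.days

for r in [TrialResult("vendor_a", 42, 900, 3, 30),
          TrialResult("vendor_b", 39, 60, 6, 30)]:
    print(f"{r.name}: precision={precision(r):.2f} "
          f"recall={recall(r):.2f} noise/day={noise_per_day(r):.1f}")
```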

Kazerounian suggested that before committing, organizations need to have a grasp of what their problems are today – which often relate to poor visibility, exhausting volumes of alerts and the growing need to respond to the speed and scale of adversaries operating in the cloud.

“When you can zero in on these problems, you’ll be in a position to evaluate which AI solutions will actually solve them, and which ones are just stuffed with AI hype and pixie dust,” he said.

Snehal Antani, co-founder and CEO at Horizon3.AI, said that thanks to open source attack tools, stolen compute resources and automation, the cost of attacking is now far lower than the cost of defending.

“We must assume that every cyberattack over the past 10 years has generated training data from which attack algorithms can be developed and tested,” he said. “These attack algorithms, which employ machine learning and AI, enable ransomware, APTs and other threat actors to efficiently discover, evade and succeed at attacking their targets.”

In short – never before have the economics of cybersecurity been so imbalanced in favor of bad actors.

Shifting the Balance of Power With AI

Antani noted there has been “moderate success” in the application of machine learning and AI for user behavior analytics and other emerging defensive techniques.
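To make “user behavior analytics” concrete, here is a minimal sketch of the underlying idea using scikit-learn’s IsolationForest: learn a baseline of normal login behavior, then flag sessions that deviate from it. The features and figures are illustrative assumptions, not any vendor’s actual model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # pip install scikit-learn

# Illustrative per-login features: [hour of day, MB transferred, failed logins].
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(10, 2, 500),   # daytime logins
    rng.normal(50, 15, 500),  # modest data transfer
    rng.poisson(0.2, 500),    # rare auth failures
])

# Fit a baseline of "normal" behavior; assume ~1% of traffic is anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. session moving 900 MB after repeated failures should stand out.
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```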

“As an industry, we need to accept that humans are the inefficiency in cyber defense and double down on algorithmic cyber warfare,” he said. “We must quickly shift from ‘humans-in-the-loop’ to ‘humans-on-the-loop’, with a vision for ‘humans-out-of-the-loop’.”

He explained that human-based penetration testing, human-powered security operations centers and human-led threat hunting must become the exception in order to keep pace with adversaries.

Darktrace’s Heinemeyer agreed, noting that the challenge of cybersecurity is no longer a human-scale problem.

“The application of AI to cyberattacks will render these threats faster and more furious than ever, and it is no longer enough to simply throw more humans into the mix,” he said. “To mitigate the threat of AI-powered attacks, we must fight fire with fire. Only AI itself can keep pace with AI.”

Nathan Eddy

Nathan Eddy is a Berlin-based filmmaker and freelance journalist specializing in enterprise IT and security issues, health care IT and architecture.
