How Offensive AI Can Disarm Cybersecurity

As more organizations adopt AI and ML as cybersecurity controls and to detect and deter attacks, cybercriminals are devising ways to use AI as the basis of attacks.

“What’s known as ‘offensive AI’ will enable cybercriminals to direct targeted attacks at unprecedented speed and scale while flying under the radar of traditional, rule-based detection tools,” MIT Technology Review stated, warning that organizations need to go on the defensive, and declaring “the battle of algorithms has begun.”

MIT Technology Review joined with Darktrace to conduct research on how to address present and future cybersecurity threats. Almost all of the nearly 300 C-level executives, directors and managers polled (96%) said they are preparing for AI-based cyberattacks, and 68% expect to see AI used to impersonate humans and launch spear phishing attacks.

“Approaches that are based on analyzing historical attacks will be ill-equipped to defend against offensive AI. A fundamentally new approach using self-learning technology and autonomous response will be necessary to augment human security teams,” Nicole Eagan, chief strategy & AI officer at Darktrace, said in a formal statement.

Manipulating AI to Be a Threat

Any system that can take input from users will be subject to manipulation, said John Bambenek, threat intelligence advisor at Netenrich, in an email comment.

“Whether that is facial recognition systems, social media postings or object recognition systems for self-driving cars, black hat SEO and social media amplification attacks are, at their core, attacks against machine-learning systems deployed by tech companies,” Bambenek added.

Because attackers are turning more to this “offensive AI,” and using techniques like brand impersonation to trick users into compromising themselves, cybersecurity teams must be more vigilant than ever. Awareness training around phishing is going to be especially important.

Manipulating Cybersecurity Tools

Researchers in Australia discovered they could fool AI-driven antivirus into classifying malware as benign. The attack worked because the “machine-learning algorithm has been trained to favor a benign file, causing it to ignore malicious code if it sees strings from the benign file attached to a malicious file,” explained Silicon Angle.
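To make the evasion concrete, here is a minimal toy sketch of the idea the researchers described: a string-based scorer (nothing like a real antivirus engine; all strings and the threshold are hypothetical) whose verdict can be dragged below the detection threshold simply by appending strings copied from a benign file.

```python
# Toy illustration of evasion-by-benign-strings. This is NOT a real
# antivirus model; the string sets and threshold are invented for the sketch.

BENIGN_STRINGS = {b"Copyright", b"Microsoft", b"GetVersion"}
MALICIOUS_STRINGS = {b"keylog", b"ransom_note", b"inject_shellcode"}

def score(sample: bytes) -> float:
    """Return the fraction of matched known strings that are malicious."""
    mal = sum(1 for s in MALICIOUS_STRINGS if s in sample)
    ben = sum(1 for s in BENIGN_STRINGS if s in sample)
    total = mal + ben
    return mal / total if total else 0.0

THRESHOLD = 0.5  # flag as malware above this score

malware = b"inject_shellcode keylog"
assert score(malware) > THRESHOLD  # detected: 2 malicious hits, 0 benign

# Evasion: append strings lifted from a benign file. The malicious code
# is untouched, but the benign matches now outnumber the malicious ones.
padded = malware + b" Copyright Microsoft GetVersion"
assert score(padded) <= THRESHOLD  # misclassified as benign
```

The point of the sketch is that the malicious payload never changes; only the surrounding features do, which is exactly why models trained to “favor a benign file” are vulnerable.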

“Basically, antivirus technology (AT) is about patterns, and in order for it to work well it needs to be trained on a decent size of similar patterns,” said Dirk Schrader, Global Vice President, Security Research at New Net Technologies, in an email interview.

Combining the attack method used by the Australian researchers with the kind of supply-chain compromise seen in the SolarWinds case gives attackers a powerful way to ‘train’ an AI solution not to see a specific attack. That grants the attacker easy access to whatever the AI solution is supposed to protect.

Threat actors are also using AI for disinformation campaigns in sectors like health care and national security. The fake cybersecurity information generated was pitch-perfect enough to fool cybersecurity professionals.

“Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations,” explained Fast Company. These transformers are also being used to spread false threat intelligence, and do it well enough to stump actual cyberthreat hunters. When misinformation about cyberthreats spreads, it can force a security team to refocus its attention on fake risks, leaving systems open to real attacks that could have tragic results.

“Security teams using AI solutions should be aware of the original training patterns used, plus any ongoing updates to it. If the solution is connected to a central core processing the data received using AI methods, the necessary question to the vendor is how they prevent pattern training using crafted input,” said Schrader.
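Schrader’s question about “pattern training using crafted input” describes a data-poisoning risk. A minimal sketch, using an invented running-statistics anomaly detector (hypothetical, not any vendor’s product), shows how a model that keeps learning from the data it receives can be gradually drifted until a specific attack looks normal:

```python
# Toy sketch of the poisoning risk: a detector that treats incoming data
# as training input can be fed crafted values until an attack blends in.

class AnomalyDetector:
    """Flags values far from the running mean of observed traffic."""
    def __init__(self):
        self.values = []

    def observe(self, x: float):
        self.values.append(x)  # every observation becomes training data

    def is_anomalous(self, x: float, k: float = 3.0) -> bool:
        mean = sum(self.values) / len(self.values)
        var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
        std = var ** 0.5 or 1.0
        return abs(x - mean) > k * std

det = AnomalyDetector()
for v in [10, 11, 9, 10, 12, 10]:  # normal baseline traffic
    det.observe(v)

attack = 100.0
assert det.is_anomalous(attack)  # initially flagged as an outlier

# Poisoning: the attacker slowly feeds escalating values, which the
# model absorbs as "normal", stretching its tolerance toward the attack.
for v in range(20, 101, 10):
    det.observe(float(v))

assert not det.is_anomalous(attack)  # the attack now blends in
```

This is why Schrader’s advice matters: knowing the original training patterns, and how ongoing updates are vetted, is the only way to notice that the model’s baseline has been quietly moved.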

“Overall, AI will not replace ‘old-school’ security; it is a valid additional layer in a security architecture,” Schrader added. “The essentials of cybersecurity—know what assets you have, what the changes and vulnerabilities are, what your sensitive data is—these essentials will stay and can already be solved with capable non-AI solutions.”


Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She's been writing about cybersecurity and technology trends since 2008.

