The Era of AI-Based Cyberattacks is Well Underway, Darktrace Says
Cybercriminals’ rapid adoption of generative AI tools in the wake of the release of systems like OpenAI’s ChatGPT and Google’s Bard is already expanding their capabilities to run malicious campaigns, according to cybersecurity firm Darktrace.
And as threat actors arm themselves with these technologies, it will be incumbent on security professionals to use those same tools to push back against the growing AI-fueled threats, Jack Stockdale, founding CTO at Darktrace, wrote in a blog post today.
“In cyber security, AI is a double-edged sword,” Stockdale wrote. “Its use by cyber-attackers is still in its infancy, but Darktrace expects that the mass availability of generative AI tools like ChatGPT will significantly enhance attackers’ capabilities by providing better tools to generate and automate human-like attacks.”
Darktrace and other cybersecurity firms have been tracking how threat groups have been using generative AI to develop new tools to launch phishing attacks and develop code. As an example, SlashNext in July pointed to WormGPT, which is sold on the dark web to enable bad actors to launch business email compromise (BEC) campaigns.
Darktrace is beginning to see indicators in three areas where AI is enhancing what cybercriminals can do, including making it easier for low-level threat actors to launch more sophisticated campaigns.
The company in April said there was a 135% increase in what Stockdale called “novel social engineering attacks” – or “email attacks that show a strong linguistic deviation from other phishing emails” – from January to February, corresponding with the broad adoption of ChatGPT, which was released in late November 2022.
The sophisticated linguistic techniques used in these attacks include increased text volume, punctuation, and sentence length, Darktrace reported in April. In addition, there was a drop in the number of malicious emails containing links or attachments for victims to click on.
All of this “suggests the use of generative AI tools is providing an avenue for threat actors to craft more sophisticated and targeted attacks, at speed and scale,” Stockdale wrote.
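To illustrate what “linguistic deviation” can mean in practice, the sketch below computes a few simple stylistic features of an email (word count, average sentence length, punctuation density, presence of links) and scores how far they deviate from a baseline of typical phishing messages. This is a minimal, hypothetical example, not Darktrace’s actual detection method; the function names and the choice of features are assumptions for illustration.

```python
import re
import statistics

def linguistic_features(email_text: str) -> dict:
    """Compute simple stylistic features of an email body."""
    sentences = [s for s in re.split(r"[.!?]+", email_text) if s.strip()]
    words = email_text.split()
    punctuation = re.findall(r"[,;:!?\"'()-]", email_text)
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "punct_per_100_words": 100 * len(punctuation) / max(len(words), 1),
        "has_link": bool(re.search(r"https?://", email_text)),
    }

def deviation_score(features: dict, baseline: dict, spread: dict) -> float:
    """Mean absolute z-score of the numeric features against a
    baseline (mean) and spread (std dev) drawn from known phishing."""
    keys = ["word_count", "avg_sentence_len", "punct_per_100_words"]
    return statistics.fmean(
        abs(features[k] - baseline[k]) / spread[k] for k in keys
    )
```

A message scoring far above the baseline on these features, while also lacking the links or attachments typical of older phishing, would fit the pattern Darktrace describes.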
More Automation Means More Speed
AI also is enabling bad actors to launch attacks much more quickly. Between May and July, Darktrace’s Cyber AI Research Centre saw a 59% increase in multistage payload attacks among its customers, where a phishing email entices the victim to take a series of steps before delivering a malicious payload or harvesting sensitive information.
Almost 50,000 more of these attacks were detected in July than May, suggesting the use of automation in launching them. Stockdale said he expects such attacks will increase in speed as threat groups use more automation and AI in their efforts.
During those same months, researchers saw a shift in the methods used by attackers who impersonate trusted people in their phishing emails. There was an 11% decline in “VIP impersonation” – where the phishing emails mimic senior executives – but the number of email account takeover attempts jumped 52%.
At the same time, the impersonation of an organization’s internal IT team in these fake emails grew 19%, according to Stockdale.
“The changes suggest that as employees have become better attuned to the impersonation of senior executives, attackers are pivoting to impersonating IT teams to launch their attacks,” he wrote. “While it’s common for attackers to pivot and adjust their techniques as efficacy declines, generative AI – particularly deepfakes – has the potential to disrupt this pattern in favor of attackers.”
Deepfakes are convincingly realistic fake video and audio creations built with machine learning techniques. With AI, hackers can use increasingly sophisticated language and highly realistic voices in deepfakes to deceive employees.
This is Just the Beginning
These are the opening salvos from criminals just beginning to use the new AI techniques, which are enabling novice threat actors to up their games. The most sophisticated AI attacks will initially come from nation-states, and will drive speed and scale more than generate new attack methods, Nicole Carignan, Darktrace’s vice president of strategic cyber AI, told Security Boulevard.
“In the longer term, we can expect offensive AI to be used throughout the attack life cycle – be it to use natural language processing or large language models to understand written language and to craft contextualized spear-phishing emails at scale or image classification to speed up the exfiltration of sensitive documents once an environment is compromised and the attackers are on the hunt for material they can profit from,” Carignan said. “AI will make it possible for machines to deploy unique attacks at scale – always on, continuously morphing at machine speed.”
Using AI to Fight AI
Such capabilities will need to be matched by defenders using AI, she said.
“The need for defenders to be everywhere, all at once, has pushed the adoption of AI in security,” Carignan said. “The volume and sophistication of threats has grown exponentially in recent years, making it extremely difficult for human security teams to monitor, detect, and react to every threat or attempted attack.”
The complexity of today’s systems means thousands of micro-decisions need to be made every day to match the spontaneous and erratic behavior of hackers and to spot and contain threats, a challenge given the increasing speed and scale of what attackers can do, she said.
“This has become a job for AI,” she said. “AI can perform thousands of calculations in real time to detect suspicious behavior and perform the micro decision-making necessary to respond to and contain malicious behavior in seconds. … Adoption will need to increase in the future as novel threats become the new normal.”
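One common building block for the kind of real-time behavioral detection Carignan describes is a rolling statistical baseline per entity (a user account, a device) that flags observations far outside recent norms. The sketch below is a simplified, hypothetical illustration of that idea using a rolling z-score; it is not Darktrace’s engine, and the class and parameter names are assumptions.

```python
from collections import deque
import statistics

class BehaviorMonitor:
    """Rolling z-score detector for one behavioral metric,
    e.g. outbound emails per minute from a single account."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent observations
        self.threshold = threshold           # z-score cut-off

    def observe(self, value: float) -> bool:
        """Record a new observation; return True if it is anomalous
        relative to the rolling baseline."""
        anomalous = False
        if len(self.history) >= 10:  # require a minimal baseline first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return anomalous
```

In practice a defender would run many such baselines in parallel, one per metric per entity, which is the “thousands of calculations in real time” the quote refers to.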
SlashNext CEO Patrick Harr agreed, telling Security Boulevard that “generative AI is a game-changer for cybercriminals, who can use it to develop, disseminate and modify attacks very quickly, but it has also improved security efficacy in organizations, too. With the increase in sophistication and volume of threats attacking organizations on all devices, generative AI-based security provides organizations with a fighting chance at stopping these breaches.”