Beware the Rise of the Autonomous Cyber Attacker
AI’s growing sophistication signals a future in which networks can be compromised autonomously, and the industry must prepare for this near-term reality.
Mainstream AI tools including ChatGPT, Gemini and Llama, along with specialized "red team" tools like WhiteRabbitNeo, effectively model data, summarize it and make predictions. What they currently lack is the ability to reason or to perform anything more than simple tasks for their operators, which is why the human-in-the-loop (HITL) model still prevails.
That's about to change with new AI innovations that promise to rival or outperform human cognitive abilities: general reasoning and agents. Both will inevitably disrupt how we use AI. In fact, AI agents are already here, though they remain highly specialized and still work best within clear parameters. As research progresses, we can expect more adaptive, context-aware agents capable of open-ended problem-solving, and eventually true general intelligence.
Here’s what this means for black hats, white hats and the industry at large.
What We’ve Witnessed
Generally, AI has accelerated at warp speed. And while the cybersecurity industry has embedded AI into areas like threat detection for years now, newer capabilities are enhancing practitioners’ workflows.
Task-specific AI functionalities are being deployed more frequently to process large datasets and to identify and mitigate threats faster than ever. Early use has shown that AI can effectively detect complex patterns and quickly flag anomalies. Overall, the technology is addressing increasingly complex challenges with oversight from professionals.
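To make that concrete, below is a minimal sketch of the kind of anomaly flagging that underpins many of these workflows. It assumes scikit-learn and synthetic per-session features; in a real pipeline the features would come from logs, netflow or EDR telemetry, and a human analyst would still review every flag. The feature names and numbers here are illustrative assumptions, not a production design.

```python
# A minimal sketch of ML-based anomaly flagging on security telemetry.
# Assumes scikit-learn; the synthetic "session" features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical per-session features: bytes out, failed logins, distinct ports.
normal = rng.normal(loc=[500, 1, 3], scale=[100, 1, 1], size=(1000, 3))
suspicious = rng.normal(loc=[50000, 20, 40], scale=[5000, 5, 5], size=(5, 3))
sessions = np.vstack([normal, suspicious])

# Isolation Forest isolates statistical outliers without labeled attack data.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(sessions)  # -1 marks anomalies, 1 marks inliers

for idx in np.where(labels == -1)[0]:
    print(f"Flag session {idx} for analyst review: {sessions[idx].round(1)}")
```

The point is less the specific model than the division of labor: the machine sifts millions of events, and the human decides what the outliers mean.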
Now the question becomes: when is the human cut out of the loop?
AI ‘IRL’
Research has already shown that teams of AIs working together can find and exploit zero-day vulnerabilities. A team at the University of Illinois Urbana-Champaign created a "task force" of AI agents that worked as a supervised unit and successfully exploited vulnerabilities they had no prior knowledge of.
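The architecture behind that experiment, a planner agent supervising task-specific agents, is worth seeing even in skeletal form. The sketch below is a deliberately inert illustration of that supervisor pattern; the class names, the round-based loop and the stubbed worker logic are assumptions for illustration only, not the researchers' actual system, and nothing here performs any real action.

```python
# A deliberately inert sketch of the "task force" pattern: a planner agent
# dispatches subtasks to narrow specialists and iterates on their results.
# All names and logic are illustrative assumptions; the workers are stubs.
from dataclasses import dataclass, field

@dataclass
class SpecialistAgent:
    """A narrow agent that only handles one category of subtask."""
    specialty: str

    def attempt(self, subtask: str) -> bool:
        # In the real pattern this would invoke an LLM with tools scoped
        # to the specialty; here we only record the dispatch.
        print(f"[{self.specialty}] attempting: {subtask}")
        return False  # stub: always report failure so the planner iterates

@dataclass
class PlannerAgent:
    """Supervises specialists: decomposes a goal, routes subtasks, retries."""
    specialists: list[SpecialistAgent] = field(default_factory=list)

    def run(self, goal: str, max_rounds: int = 3) -> None:
        for round_num in range(1, max_rounds + 1):
            print(f"-- round {round_num}: planning toward '{goal}'")
            for agent in self.specialists:
                if agent.attempt(f"{goal} via {agent.specialty}"):
                    print("Subtask succeeded; planner revises its plan.")
                    return

planner = PlannerAgent([SpecialistAgent("research"), SpecialistAgent("triage")])
planner.run("assess the lab environment")
```

The researchers reported that this team structure, not any single model, is what let the agents succeed where a lone model working alone fell short.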
In a recent report, OpenAI also cited three threat actors that used ChatGPT to discover vulnerabilities, research targets, write and debug malware, and set up command-and-control infrastructure. The company said the activity offered these groups "limited, incremental (new) capabilities" to carry out malicious cyber tasks.
Expect these capabilities to truly "enter the wild," shaping the next major threat in cybersecurity: the autonomous attacker.
As with the task force, this will likely begin as a red-team or academic exercise, or as a capability in the hands of well-resourced agencies. But it will then trickle down to cybercriminals as the technology gets cheaper and easier to use.
How Will This Look?
Once integrated, this advanced AI will increase the potency of cyberattacks. With little effort, motivated nation-state actors may simply prompt their AI with the following: "Continuously monitor the internet looking for computer systems that belong to our adversary. When you find one, determine the best way to breach it without being detected. Then, establish a backdoor, set up monitoring and learn from your mistakes to improve attacks over time."
Similarly, well-funded cybercriminals will use AI agents to infect multiple targets with ransomware simultaneously, or even to run and fine-tune malvertising campaigns.
This turns AI from a tool heavily dependent on its operators' inputs into a highly sophisticated platform with expertise akin to, or exceeding, that of human professionals. It will plan out tasks, interact with the world and solve the problems it encounters.
Reining This In
“Darker” AI use has, in part, prompted many of today’s top thinkers to support regulations. This year, OpenAI CEO Sam Altman said: “I’m not interested in the killer robots walking on the street…[and] things going wrong. I’m much more interested in the very subtle societal misalignments, where we just have these systems out in society and through no particular ill intention, things go horribly wrong.”
He's advocated for an international body, comparable to the one that governs atomic energy, to monitor AI.
Theoretically, regulation may reduce unintended or dangerous use among legitimate users, but I’m certain that the criminal economy will appropriate this technology. As CISOs deploy AI more broadly, attackers’ abilities will concurrently soar.
To counter this, we must stay on top of security workloads, minimize our attack surfaces, monitor alerts continuously and automate patching wherever possible.
If we don’t, adversaries could gain the upper hand in the AI arms race.
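On the patching point in particular, even simple automation beats a manual cadence. The sketch below assumes a Debian-style host with standard apt tooling available; the naive parsing and the apply-immediately policy are illustrative assumptions, and most teams would stage upgrades through testing rather than apply them blindly.

```python
# A minimal sketch of automated patch checking on a Debian-style host.
# Assumes the standard apt tooling; the auto-apply policy is an assumption.
import subprocess

def pending_upgrades() -> list[str]:
    """List upgradable packages as reported by apt (parsed naively)."""
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split("/")[0] for line in out.splitlines() if "/" in line]

def apply_upgrades() -> None:
    """Apply pending upgrades non-interactively; real deployments stage this."""
    subprocess.run(["apt-get", "-y", "upgrade"], check=True)

if __name__ == "__main__":
    pending = pending_upgrades()
    if pending:
        print(f"{len(pending)} packages pending, e.g. {pending[:5]}")
        apply_upgrades()
    else:
        print("Host is current; nothing to patch.")
```

Run on a schedule, a loop like this shrinks the window between patch release and deployment, which is exactly the window an autonomous attacker would probe.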
United in Our Defense
With AI innovation, we may see half a century’s worth of advancement in a single decade.
In cyber defense, AI will become more surgical at identifying zero-days and automating responses, thus protecting both critical and corporate infrastructure. Still, as many top minds have cautioned, it poses significant risks, with bad actors potentially countering with equal force.
Organizations, industry groups and other stakeholders must innovate together to stay united in the fight against illicit AI use, particularly against this lightning-fast exploitation process.