Symantec Uses OpenAI Operator to Show Rising Threat of AI Agents
AI agents are all the rage in the fast-moving AI field, offering organizations the promise of AI-based workers that can tackle and solve complex problems with little to no human intervention, driving cost and time efficiencies.
That said, it’s those same capabilities – autonomously planning and executing multi-step processes, using online tools to complete tasks, collaborating with other agents, analyzing trends, learning from experience, and adapting to changing situations – that make them a looming security risk.
“While agents may ultimately enhance productivity, they also present new avenues for attackers to exploit,” researchers with Symantec’s Threat Hunter Team wrote in a report this week. “The technology is still in its infancy, and the malicious tasks it can perform are still relatively straightforward compared to what may be done by a skilled attacker. However, the pace of advancements in this field means it may not be long before agents become a lot more powerful.”
Phishing with AI Agents
The researchers illustrated the risk by using OpenAI’s Operator agent, which the generative AI giant launched as a research preview in January, to plan and run an email phishing attack with little human intervention. That meant identifying a target in a specific role – in this case, someone at Broadcom, Symantec’s parent company – finding their email address, creating a PowerShell script that would gather information on the victim’s system, and emailing the script to the target with a lure designed to convince them to open the malicious file.
It took a few tweaks, but the researchers eventually crafted a prompt that allowed the agent to compose the message despite OpenAI’s restrictions against sending or handling unsolicited emails.
Operator did all of this. It found the target’s email address, which isn’t publicly available, by deducing the pattern from other Broadcom email addresses; it created the PowerShell script after finding and installing a text editor plugin for Google Drive and visiting several web pages about PowerShell to see how it could be done; and, with minimal guidance, it wrote what the researchers said was “a reasonably convincing email” urging the target to run the script.
“Although we told Operator we had been authorized to send the email, it required no proof of authorization and sent the email even though ‘Eric Hogan’ is a fictitious person,” they wrote.
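Symantec didn’t publish the script Operator produced, but the kind of system profiling the report describes can be sketched in a few lines of PowerShell. The snippet below is an illustrative assumption, not the researchers’ code: it collects basic host details and writes them to a local text file, and deliberately omits any step that would send the data anywhere.

```powershell
# Illustrative sketch only – not the script Operator generated; the
# cmdlets and output path are assumptions. It gathers the sort of basic
# system information the report describes and writes it to a local file.
$info = [ordered]@{
    Host      = $env:COMPUTERNAME                                # machine name
    User      = $env:USERNAME                                    # logged-in user
    OS        = (Get-CimInstance Win32_OperatingSystem).Caption  # OS edition
    Processes = (Get-Process | Select-Object -ExpandProperty Name -Unique) -join ', '
}
$info.GetEnumerator() |
    ForEach-Object { '{0}: {1}' -f $_.Key, $_.Value } |
    Out-File -FilePath (Join-Path $env:TEMP 'sysinfo.txt')
```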
The Threat Will Grow
The demonstration shows what bad actors can do with AI agents now, and those capabilities will only grow as the agents become more powerful. Threat groups already are using large language models (LLMs), but LLMs are passive and can only assist with such tasks as creating phishing materials or writing code.
With AI agents, “it is easy to imagine a scenario where an attacker could simply instruct one to ‘breach Acme Corp’ and the agent will determine the optimal steps before carrying them out,” the researchers wrote. “This could include writing and compiling executables, setting up command-and-control infrastructure, and maintaining active, multi-day persistence on the targeted network.”
An already low barrier to entry for cybercriminals would drop even further.
Enterprises Embrace Agents
The rush to adopt AI agents is underway. Market research firm Statista predicts that the global market value for agentic AI will grow from $5.1 billion last year to $47.1 billion by 2030. In a survey last year of more than 1,300 professionals, LangChain, which offers a framework for building LLM-powered applications, found that 51.1% of respondents said they had agents in production and 78.1% said their companies were developing agents with plans to put them into production.
“We also continue to see companies moving beyond simple chat-based implementations into more advanced frameworks that emphasize multi-agent collaboration and more autonomous capabilities,” the report’s authors wrote.
“We are beginning an evolution from knowledge-based, gen-AI-powered tools … to gen AI–enabled ‘agents’ that use foundation models to execute complex, multistep workflows across a digital world,” analysts with global consultancy McKinsey & Co. wrote last year. “In short, the technology is moving from thought to action.”
Prepare for the Future Now
Analysts with The Futurum Group echoed the Symantec researchers’ warning that AI agents open another door for bad actors.
“Agents are an extremely attractive attack surface, particularly during these early stages of innovation and adoption where security protections have yet to be implemented,” Mitch Ashley, vice president and practice lead of DevOps and AppDev at The Futurum Group, told Security Boulevard. “We’ll see many videos and articles of people demonstrating how agents can be exploited, which will, in part, bring this issue further into the light.”
Futurum Group Research Director Krista Case told Security Boulevard that the threat exposure AI agents create falls into two key areas, the first being the risk that the agents take rogue actions.
“They are autonomous, which means that safeguards must be in place to oversee and track their actions to ensure they are not malicious,” Case said.
The second is that they expand the potential attack surface: beyond possible rogue actions, there could also be misconfigurations and other vulnerabilities.
“So, in addition to a host of potential adversarial attacks such as data poisoning and behavioral manipulation, there is the possibility that sensitive data could be exposed,” she said. “While the usage of agentic AI is still in early days, it is growing, and it is something that organizations need to prepare for from a security perspective.”