Wallarm Extends API Security Reach to AI Agents
Wallarm at the 2025 RSA Conference announced that, starting this summer, it will extend the reach of its platform for securing application programming interfaces (APIs) to include artificial intelligence (AI) agents.
Tim Erlin, vice president of product for Wallarm, said the Agentic AI Protection capability added to the platform makes it possible to thwart attack vectors involving prompt injections, jailbreaks, system prompt retrieval and agent logic abuse. It provides that capability by applying behavioral and semantic analysis to identify suspicious patterns in both incoming queries and outgoing responses.
Every AI agent deployed relies on an API to access data. Rather than requiring cybersecurity teams to build and deploy a separate platform to secure AI agents, Wallarm is making a case for extending an API security platform that makes use of machine learning algorithms to now also discover, monitor, analyze, and block attacks against AI agents, said Erlin.
An analysis of data collected by Wallarm suggests cybersecurity issues involving AI agents are already being encountered, with 65% involving APIs. A quarter of those issues remain open, with organizations taking on average 42 days to resolve them. More than 700 issues have not been addressed at all.
While it’s still early so far as adoption of AI agents is concerned, they will present a tempting target. AI agents will soon be embedded across a wide range of workflows and applications. Compromising any one of them will enable cybercriminals, for example, to potentially reroute customer service requests.
In general, attacks against AI models and agents fall broadly into four categories. Data poisoning involves injecting malicious or misleading data into the datasets used to train the model as part of an effort to ensure suboptimal outputs. Backdoor attacks, meanwhile, hide triggers within the model that activate malicious behavior when specific conditions are met.
The other two types of attacks involve data security issues where, for example, data is exfiltrated via an API, and attacks involving attempts to convince the AI model or agent to perform a malicious task.
Unfortunately, few organizations have implemented rigorous AI safety and cybersecurity protocols. In the absence of any formal policy, many end users are sharing sensitive data with providers of AI models with little to no appreciation for how that data is being used or might need to be secured. Some organizations are banning usage of AI models with no ability to enforce those policies but many end users are simply treating AI models as one more shadow IT service to be surreptitiously employed as they see fit.
Hopefully, more organizations in consultation with their cybersecurity teams will soon be implementing the controls needed to put some teeth into the policies they define. Cybercriminals are clearly honing their prompt engineering skills to circumvent AI guardrails. Eventually, it’s not a matter of if there will be major cybersecurity incidents involving AI so much as it is when and to what degree. The only thing not nearly as certain, however, is how organizations will respond to those breaches when they soon inevitably occur.