Agentic AI Enhances Enterprise Automation: Without Adaptive Security, its Autonomy Risks Expanding Attack Surfaces
The rise of agentic AI is accelerating, with major tech companies racing to develop autonomous AI systems. OpenAI recently introduced Operator, showcasing the potential of AI agents, though its current capabilities remain limited. Google has launched Agentspace, a platform designed to build AI-driven business automation. Microsoft, Amazon, Oracle and other tech giants are also investing heavily in AI agents, reflecting a broader industry shift toward intelligent automation. Deloitte predicts enterprise adoption of AI Agents will jump from 25% in 2025 to 50% by 2027, underscoring the urgency for organizations to adapt. But as enterprises embrace AI autonomy, a critical question looms – how well is security keeping up?
Perhaps it goes without saying, but all advancements in AI need to be accompanied by cybersecurity measures equipped to protect them. That’s all the more true when taking on a major leap forward like agentic AI. Agentic AI is incredibly sophisticated by design, and capable of solving complex problems on its own through iterative reasoning and planning.
In practice, an AI agent does far more than, for instance, detect anomalous patterns in data and flag it for an external system to fix. An AI agent would be able to inspect the anomaly and identify it as a potential threat by itself, and then recommend the correct action based on an established playbook. This is autonomy at the highest level, and that creates a new paradigm within AI cybersecurity.
AI Hesitancy and its Discontents
All this would perhaps sound more promising if enterprises were already keeping pace with AI security and adopting AI tools with confidence. The opposite is true. According to a 2024 Salesforce report, just 11% of CIOs have fully implemented AI technology. The most common obstacle to adoption cited by CIOs surveyed was security or privacy threats.
To be clear, these CIOs’ concerns are not unfounded. Nor is it likely that they’ve missed the Deepseek-related rush of the bracing new threat of AI models potentially developing completely inscrutable languages and reasoning that would circumvent and negate human guardrails. If they feel their enterprise security is ill-equipped to handle agentic AI, they’re probably right. This is a new paradigm by any measure.
Software agents themselves are not new. What’s new is the speed and scope at which modern AI models can create use-case-specific agents to flexibly operate within different scenarios and the broad attack surface that new speed and scope create. The central cybersecurity question is how to secure and control autonomous agents. Agentic AI must be prevented from accessing untrusted data sources, and known combinations of tools and instructing other agents to conduct activities on their behalf
AI Independence, Within Reason
AI agents are not self-protecting. Like any other type of AI, these agents can be socially engineered, which is to say subjected to instruction injection or designed to be malicious by default.
Enterprises evaluating AI security solutions should consider four key criteria:
(1) Autonomy control. As agentic AI solutions operate with varying degrees of independence, organizations need fine-grained control over how much decision-making power an AI agent has. This includes defining guardrails around trusted data sources, restricting tool access and ensuring human oversight in critical operations.
(2) Attack surface management. AI agents introduce a new security paradigm where the speed and complexity of automation expand the attack surface. Organizations must ensure that their AI security solutions can detect and mitigate threats such as instruction injection, adversarial manipulation, or unauthorized agent-to-agent interactions.
(3) Adaptability. Unlike traditional security tools, AI-driven security solutions need to be dynamic and continuously evolve. The ability to learn from new threats, integrate real-time threat intelligence and update playbooks autonomously is crucial to staying ahead of adversaries.
(4) Integration and interoperability with existing security frameworks. AI security solutions must seamlessly fit within an enterprise’s broader security stack, including SIEM, SOAR and existing monitoring frameworks. The best solutions not only detect and analyze threats but also operationalize response workflows, ensuring that AI-powered security enhances rather than complicates enterprise defense strategies.
In practice, this approach requires multiple tactics. All agentic AI solutions operate with some degree of independence, making it essential for organizations to have fundamental and detailed control over the extent of its decision-making power. A new, bigger attack surface requires smarter AI security solutions. These solutions must be as smart as the agentic AI they protect, continuously learn from new threats, integrate real-time threat intelligence and update playbooks autonomously. Staying ahead of threats is a continuous, real-time process that can’t depend on after-the-fact adjustments.
Totally Secure or Nothing
Enterprises that aren’t prepared to simultaneously integrate every aspect of the four-pronged agentic AI cybersecurity strategy aren’t prepared for agentic AI at all. Agentic AI will only support enterprise security if it is itself protected. The security that agentic AI provides, primarily by automatically identifying threats and autonomously creating playbooks for other AI agents later on, is enormously beneficial to enterprises – but only if they’re ready to protect it (and themselves) accordingly.