
Securing AI Agents: A New Frontier in Cybersecurity

RSA Conference 2025 has just wrapped up, and one thing’s clear: AI agents are everywhere—and apparently, they need security guards too.

These digital overachievers are working 24/7, managing networks, analyzing data, and getting things done while we’re all just trying to find a charger. But without proper security, these agents could accidentally leak sensitive information, misuse credentials, or even open the floodgates for hackers to exploit vulnerabilities.

While AI agents are revolutionizing industries, the cybersecurity world is scrambling to figure out how to protect these new digital workers, especially given their ability to operate autonomously. At the RSA Conference 2025, David Bradbury, Chief Security Officer at Okta, summed it up perfectly: “You can’t treat them like a human identity and think that multifactor authentication applies in the same way.”

As AI agents become a larger part of the workforce, the need for robust security measures has never been more pressing. According to Deloitte, 25% of companies using generative AI are expected to launch agentic AI pilots this year, a figure projected to reach 50% by 2027. These statistics underscore the rapid expansion of AI’s role and the growing cybersecurity risks associated with it.

The Security Implications of Autonomous AI Agents

The rise of AI agents has already prompted significant security concerns. Without proper guardrails, these agents could inadvertently cause data breaches, misuse login credentials, or leak sensitive information, especially given their ability to act independently and at speed. Many organizations’ security infrastructure simply wasn’t built with AI agents in mind, and the problem becomes even more complicated as machine identities continue to proliferate across enterprise environments.

CyberArk’s 2025 Identity Security Landscape report reveals that machine identities now outnumber human identities by more than 80 to 1, a stark reminder of just how quickly this shift is happening. As these agents take on more critical tasks, they require as much—if not more—security as human employees.

In fact, experts argue that AI agents need “elevated trust” to ensure they don’t pose a risk. While securing traditional machine-based identities like VPN gateways and file servers is already part of the cybersecurity landscape, AI agents are far more complex. As Jeff Shiner, CEO of 1Password, explains: “An agent acts and reasons, and as a result of that, you need to understand what it’s doing.”
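What might that look like in practice? As a rough, hypothetical sketch (not any vendor’s actual product), an agent identity could be issued a short-lived, narrowly scoped credential instead of a human-style login, with every action it takes written to an audit trail:

```python
import time
import uuid
from dataclasses import dataclass

# Hypothetical illustration only: the agent gets a short-lived, narrowly scoped
# credential (not a human-style password plus MFA), and every action it takes
# is recorded so security teams can understand what it is doing.

@dataclass
class AgentCredential:
    agent_id: str
    scopes: set        # least-privilege permissions, e.g. {"tickets:read"}
    expires_at: float  # short lifetime forces frequent re-issuance


    def allows(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at


def issue_credential(agent_id: str, scopes: set, ttl_seconds: int = 900) -> AgentCredential:
    """Issue a scoped credential that expires quickly (15 minutes here)."""
    return AgentCredential(agent_id, set(scopes), time.time() + ttl_seconds)


audit_log = []

def agent_action(cred: AgentCredential, scope: str, description: str) -> bool:
    """Check the credential, then log who did what, when, and whether it was allowed."""
    allowed = cred.allows(scope)
    audit_log.append({
        "event_id": str(uuid.uuid4()),
        "agent_id": cred.agent_id,
        "scope": scope,
        "action": description,
        "allowed": allowed,
        "timestamp": time.time(),
    })
    return allowed


# The agent can read support tickets, but an attempt to export customer data is denied and logged.
cred = issue_credential("support-agent-7", {"tickets:read"})
agent_action(cred, "tickets:read", "summarize open tickets")      # True
agent_action(cred, "customers:export", "dump customer records")   # False, with an audit entry
```

The point isn’t this particular code; it’s the pattern: least-privilege scopes, credentials that expire quickly, and a record of every action the agent takes.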

A Call for Immediate Action: Securing AI Agents

As companies rapidly deploy AI agents, security vendors are racing to develop solutions that can help manage these new digital employees. At the RSA Conference, security providers such as 1Password, Okta, and OwnID introduced products designed to secure AI identities. These tools aim to provide the necessary protection for AI agents, ensuring that they can carry out their work without compromising an organization’s security.

Proactive security measures will be vital as AI agents take on more responsibility. 

*** This is a Security Bloggers Network syndicated blog from Centraleyes authored by Rebecca Kappel. Read the original post at: https://www.centraleyes.com/securing-ai-agents-a-new-frontier-in-cybersecurity/