The growing chaos of AI agents: Why your identity architecture is headed for trouble

AI agents are becoming central to how work gets done — from handling customer service chats to triggering infrastructure automation. But while the hype around agentic AI is reaching a fever pitch, most enterprises are already encountering a less glamorous reality:

Their identity infrastructure can’t handle the agents.

We’re seeing early signs of a pattern that mirrors the rise of the cloud a decade ago: innovation moving faster than governance. And this time, the problem is worse, because AI agents aren’t just accessing your systems.

They’re acting in them.

Here are the specific, systemic problems that traditional identity systems weren’t built to solve — and why they’re now slowing down secure enterprise adoption of agent-based AI.


No system of record for agents

Human users get profiles in identity providers. Machines get service accounts. But where do agents go?

In most environments today:

  • Agents aren’t enrolled in an IdP
  • Their credentials are hardcoded or shared
  • There’s no audit trail of what the agent did, why, or on whose behalf

Without an authoritative registry for agent identities, agents become untracked actors in your environment — able to perform actions with no attribution, scope boundaries, or accountability.
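
To make the gap concrete, here’s a minimal sketch of what an entry in an agent system of record could look like. The `AgentRecord` structure and its field names are illustrative assumptions, not an existing standard:

```python
# Illustrative sketch only: the structure and field names are assumptions,
# not an existing standard or product schema.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentRecord:
    agent_id: str          # stable, unique identifier for the agent
    owner: str             # human or team accountable for the agent
    delegated_by: str      # user on whose behalf the agent acts
    scopes: list[str]      # explicit, least-privilege permissions
    expires_at: datetime   # TTL so stale agents age out automatically

    def is_authorized(self, scope: str) -> bool:
        """An action is allowed only if it is in scope and the record is live."""
        return scope in self.scopes and datetime.now(timezone.utc) < self.expires_at

# Example: a support bot that may read tickets for one hour, attributable to a user.
record = AgentRecord(
    agent_id="agent-7f3a",
    owner="support-platform-team",
    delegated_by="alice@example.com",
    scopes=["tickets:read"],
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
assert record.is_authorized("tickets:read")
assert not record.is_authorized("tickets:delete")
```

Even a record this small answers the attribution, scoping, and lifecycle questions that service accounts and hardcoded keys cannot.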


You can’t discover or organize AI agents at runtime

Agents today are:

  • Ephemeral (they spin up and down constantly)
  • Decentralized (they run in CI/CD pipelines, behind APIs, inside LLM services, and on edge systems)
  • Invisible (they don’t appear in IdP logs or IAM dashboards)

Security and IAM teams often don’t know:

  • How many agents are active
  • Which APIs they are calling
  • Whether they’re still authorized

Without a dynamic discovery mechanism, most security teams are flying blind.
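
As a sketch of what dynamic discovery could mean in practice, here’s a toy runtime inventory built from agent heartbeats. The heartbeat format and staleness threshold are assumptions for illustration:

```python
# Illustrative sketch: a naive runtime inventory built from agent heartbeats.
# The data shape and the staleness threshold are assumptions for demonstration.
from datetime import datetime, timedelta, timezone

heartbeats = {
    # agent_id -> (last_seen, apis_called)
    "agent-7f3a": (datetime.now(timezone.utc), {"tickets-api"}),
    "agent-09bc": (datetime.now(timezone.utc) - timedelta(days=3), {"billing-api"}),
}

STALE_AFTER = timedelta(hours=1)

def inventory(now: datetime) -> None:
    """Answer the basic questions: how many agents, calling what, still alive?"""
    for agent_id, (last_seen, apis) in heartbeats.items():
        status = "active" if now - last_seen < STALE_AFTER else "STALE - review"
        print(f"{agent_id}: {status}, calls {sorted(apis)}")

inventory(datetime.now(timezone.utc))
```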


Agent permissions are wildly over-scoped

In the absence of scoped OAuth tokens or agent-level policies, agents are typically granted:

  • Admin-level API keys
  • Broad user credentials
  • Blanket access to services

This makes agents a prime target for abuse or exploitation, especially in systems where Zero Trust enforcement doesn’t extend to non-human actors.

Because there is no central policy or registry, these risky permissions are difficult to detect or revoke.
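
For contrast, here’s roughly what least privilege looks like: a standard OAuth 2.0 client credentials request (RFC 6749) for a single narrow scope. The token endpoint, client credential, and scope name below are placeholders; substitute your identity provider’s values:

```python
# Illustrative sketch of requesting a narrowly scoped OAuth 2.0 token
# (client credentials grant, RFC 6749). Endpoint and names are placeholders.
import requests

resp = requests.post(
    "https://idp.example.com/oauth2/token",   # placeholder token endpoint
    data={
        "grant_type": "client_credentials",
        "scope": "tickets:read",              # one narrow scope, not admin
    },
    auth=("agent-7f3a", "CLIENT_SECRET"),     # per-agent credential, not shared
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]    # short-lived, least-privilege token
```

A short-lived token bound to one scope is revocable and auditable in a way an admin API key never is.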


No federation or policy portability across agent platforms

Today’s AI agents are deployed on:

  • Azure (e.g., Azure OpenAI integrations)
  • AWS (e.g., LangChain, serverless pipelines)
  • GCP (e.g., Vertex AI agents)
  • On-prem platforms (e.g., CrewAI, custom LLM services)

Each environment handles agent identity differently, if at all. There’s:

  • No consistent way to authenticate or authorize agents
  • No shared policy model
  • No common scopes or access definitions

This lack of standardization means agent identity policy becomes fragmented and brittle, especially across hybrid and multi-cloud environments.
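
One way to picture portability is a platform-neutral policy document that evaluates the same way everywhere the agent runs. The schema below is an assumption sketched for illustration, not a real standard:

```python
# Illustrative sketch of a platform-neutral agent policy: the schema is an
# assumption meant to show what "portable" could mean, not a real standard.
policy = {
    "agent_id": "agent-7f3a",
    "allow": [
        {"service": "tickets-api", "actions": ["read"]},
    ],
}

def is_allowed(policy: dict, service: str, action: str) -> bool:
    """Evaluate the same policy document regardless of where the agent runs."""
    return any(
        rule["service"] == service and action in rule["actions"]
        for rule in policy["allow"]
    )

# The same check applies whether the agent runs on Azure, AWS, GCP, or on-prem.
assert is_allowed(policy, "tickets-api", "read")
assert not is_allowed(policy, "billing-api", "read")
```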


No way to tie agent behavior back to humans

When an agent performs an action — say, purchasing tickets, deleting a resource, or filing an expense — who is responsible?

Most systems today can’t answer:

  • Which user delegated the agent
  • What intent was behind the action
  • Whether the agent stayed within the authorized context

That’s a compliance and audit nightmare, especially for regulated industries where attribution is critical.
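
There is existing machinery to build on here. OAuth 2.0 Token Exchange (RFC 8693) defines an `act` (actor) claim that records who is acting on whose behalf. The payload below is a hand-built illustration, not output from a real identity provider:

```python
# Illustrative sketch: the OAuth 2.0 Token Exchange "act" (actor) claim
# (RFC 8693) is one existing way to record delegation in the token itself.
# This payload is a hand-built example for demonstration.
import json

token_payload = {
    "sub": "alice@example.com",    # the human who delegated the work
    "act": {"sub": "agent-7f3a"},  # the agent acting on her behalf
    "scope": "tickets:read",       # the context the agent was authorized for
    "exp": 1767225600,             # expiry bounds how long the delegation lasts
}

# An auditor can now answer "who delegated this?" from the token alone.
print(json.dumps(token_payload, indent=2))
```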


Shadow agents are emerging — and they’re dangerous

Security teams are already seeing signs of “shadow agents”:

  • Scripts written by devs that act like agents but never go through IAM review
  • Low-code automations that operate via unsecured webhooks
  • External LLMs tied to production services with no scoped authorization

These agents are:

  • Not enrolled in any identity fabric
  • Not tracked by any registry
  • Not constrained by any policy

And they represent one of the fastest-growing sources of ungoverned access in modern enterprise environments.
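
Even without a full identity fabric, a first pass at detection can be as simple as diffing gateway traffic against whatever registry you do have. The log format below is an assumption for illustration:

```python
# Illustrative sketch: diff API gateway logs against the agent registry to
# surface callers with no registered identity. The log format is an assumption.
registered = {"agent-7f3a", "agent-09bc"}

gateway_log = [
    {"caller": "agent-7f3a", "path": "/tickets"},
    {"caller": "cron-script-42", "path": "/billing/export"},  # never enrolled
]

shadow = {entry["caller"] for entry in gateway_log if entry["caller"] not in registered}
for caller in sorted(shadow):
    print(f"possible shadow agent: {caller}")  # candidates for IAM review
```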


There’s no layer to coordinate human, app, and agent identity

Identity teams today have to choose between:

  • Extending human-centric IAM tools to agents (which doesn’t scale)
  • Rebuilding identity governance per agent platform (which isn’t sustainable)

What’s missing is a unified abstraction layer that ties together:

  • Agent identity (via dynamic registration, scopes, TTLs)
  • App identity (via access enforcement, audit)
  • Human identity (via delegation, context, approval)

Without this layer, enterprises are forced into fragile, fragmented IAM architectures that can’t handle runtime identity for AI-driven workflows.
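
To make the idea concrete, here’s a sketch of that layer’s core check: a broker that refuses to mint an agent credential unless human delegation, app policy, and agent registration all line up. Every function and name here is an assumption, not a product API:

```python
# Illustrative sketch only: every function and name is an assumption, not a
# product API. The point is the shape of the check, not the implementation.
from datetime import datetime, timedelta, timezone

REGISTRY: dict[str, dict] = {}   # stand-in for the agent system of record

def user_approved_delegation(user: str, agent_id: str) -> bool:
    return True   # placeholder: look up an explicit human approval record

def app_policy_allows(app: str, scopes: list[str]) -> bool:
    return set(scopes) <= {"tickets:read"}   # placeholder: consult app policy

def issue_agent_credential(user: str, app: str, agent_id: str, scopes: list[str]) -> dict:
    """Tie human, app, and agent identity together at issuance time."""
    if not user_approved_delegation(user, agent_id):   # human: delegation
        raise PermissionError("no human delegation on record")
    if not app_policy_allows(app, scopes):             # app: enforcement
        raise PermissionError("requested scopes exceed app policy")
    credential = {
        "agent_id": agent_id,
        "delegated_by": user,                          # attribution built in
        "scopes": scopes,                              # least privilege built in
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=15),  # short TTL
    }
    REGISTRY[agent_id] = credential                    # agent: registered, auditable
    return credential

cred = issue_agent_credential("alice@example.com", "ticket-app", "agent-7f3a", ["tickets:read"])
```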


The bottom line

AI agents are here — and they’re multiplying fast.

But our identity systems were built for humans, not autonomous actors. Until we adapt, we’ll continue to see:

  • Agents operating without authentication
  • Scopes that can’t be verified
  • Identities that can’t be traced
  • Actions that can’t be audited

