Author: Itamar Apelblat, CEO and Co-Founder, Token Security
Not long ago, AI deployments inside the enterprise meant copilots drafting emails or summarizing documents. Today, AI agents are provisioning infrastructure, answering customer support tickets, triaging alerts, approving transactions, writing production code, and much more. They are no longer passive assistants. They are operators within the enterprise.
For CISOs, this shift creates a familiar but amplified problem: access.
Every AI agent authenticates to systems and services. It uses API keys, OAuth tokens, cloud roles, or service accounts. It reads data, writes configurations, and calls downstream tools. In other words, it behaves exactly like an identity, because it is one.
Yet in many organizations, AI agents are not governed as first-class identities. They inherit the privileges of their creators. They operate under over-scoped service accounts. They are granted broad access just to make things work. Once deployed, they often evolve faster than the controls around them.
This is the growing blind spot in AI security.
The first step toward closing it is what we call identity-first security for AI: recognizing that every autonomous agent must be governed, audited, and attested just like a human user or machine workload. That means unique identities, defined roles, clear ownership, lifecycle management, access control, and auditability.
But here is the hard truth: identity alone is not sufficient.
Traditional identity and access management (IAM) answers a simple question: Who is requesting access? In a human-driven world, that was usually enough. Users had roles and job functions. Services had defined scopes. Workflows were relatively predictable.
AI agents change that equation.
They are dynamic by design. They interpret inputs, plan actions, and call tools based on context. An AI agent that begins with the mission of generating a quarterly report might, if prompted or misdirected, attempt to access systems unrelated to reporting. An infrastructure agent designed to remediate vulnerabilities might pivot to modifying configurations in ways that exceed its original scope.
When that happens, identity-based controls do not necessarily stop it.
Traditional IAM assumes determinism. A role is granted because a user or service performs a defined function. The scope of action is predictable.
AI agents break that assumption. Their goal may be fixed, but the path they take to achieve it is fluid. They reason, chain tools together, and explore alternative actions.
Static roles were never designed for actors that decide how to act in real time. If the agent's role permits the action, access is granted, even when the action no longer aligns with the reason the agent was deployed in the first place.
This is where intent-based permissioning becomes essential.
If identity answers who, intent answers why.
Intent-based permissions evaluate whether an agent's declared mission and runtime context justify activating its privileges at that moment. Access is no longer a static mapping between identity and role. It becomes conditional on purpose.
Consider an AI agent responsible for deploying code. In a traditional model, it would have standing permissions to modify infrastructure. In an intent-aware model, those privileges activate only when the deployment is tied to an approved pipeline event and change request. If the same agent attempts to modify production systems outside that context, the privileges do not activate.
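To make the idea concrete, here is a minimal sketch of intent-conditional authorization. All names here (the agent ID, the profile fields, the context keys) are hypothetical illustrations, not Token Security's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AccessRequest:
    agent_id: str              # unique identity of the agent
    action: str                # what the agent is trying to do
    context: dict = field(default_factory=dict)  # runtime context, e.g. triggering event

# Hypothetical intent profile: the deploy agent's privileges are valid
# only when tied to an approved pipeline event and change request.
INTENT_PROFILES = {
    "deploy-agent": {
        "allowed_actions": {"deploy", "modify_infra"},
        "required_context": {"pipeline_event": "approved", "change_request": True},
    }
}

def authorize(req: AccessRequest) -> bool:
    """Grant access only when identity, action, and intent context all align."""
    profile = INTENT_PROFILES.get(req.agent_id)
    if profile is None:
        return False  # unknown identity: deny by default
    if req.action not in profile["allowed_actions"]:
        return False  # action outside the agent's declared mission
    # Privileges activate only when every required context condition holds.
    return all(req.context.get(k) == v
               for k, v in profile["required_context"].items())

# Same identity, two outcomes: context decides whether privileges activate.
in_pipeline = AccessRequest("deploy-agent", "deploy",
                            {"pipeline_event": "approved", "change_request": True})
ad_hoc = AccessRequest("deploy-agent", "modify_infra")
print(authorize(in_pipeline))  # True: approved pipeline context
print(authorize(ad_hoc))       # False: no approved change context
```

The point of the sketch is the last line: the role would permit the action, but without the approved-change context the privilege never activates.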
The identity has not changed, but the intent, and therefore the authorization, has.
This combination addresses two of the most common failure modes we are seeing in AI deployments.
First, privilege inheritance. Developers often test agents using their own elevated credentials. Those privileges persist in production environments, creating unnecessary exposure. Treating agents as distinct identities helps eliminate this bleed-through.
Second, mission drift. AI agents can pivot mid-run based on prompts, integrations, or adversarial input. Intent-based controls prevent that pivot from becoming unauthorized access.
For CISOs, the value is not just tighter control. It is governance that scales.
AI agents interact with thousands of APIs, SaaS platforms, and cloud resources. Trying to manage risk by enumerating every permissible action quickly becomes unmanageable. Policy sprawl increases complexity, and complexity erodes assurance.
An intent-based model simplifies oversight. Governance shifts from managing thousands of discrete action rules to managing defined identity profiles and approved intent boundaries.
Policy reviews focus on whether an agent's mission is appropriate, not on whether every individual API call is accounted for in isolation.
Audit trails become more meaningful as well. When an incident occurs, security teams can determine not only which agent performed an action, but which intent profile was active and whether the action aligned with its approved mission.
That level of traceability is increasingly essential for regulatory scrutiny and board-level accountability.
The broader issue is this: AI agents are moving faster than traditional access control models were designed to handle. They operate at machine speed, adapt to context, and orchestrate across systems in ways that blur the lines between application, user, and automation.
CISOs cannot afford to treat them as just another workload.
The shift to agentic AI systems requires a shift in security thinking. Every AI agent must be treated as an accountable identity. And that identity must be constrained not only by static roles, but by declared purpose and operational context.
The path forward is clear. Inventory your AI agents. Assign them unique, lifecycle-managed identities. Define and document their approved missions. And enforce controls that activate privileges only when identity, intent, and context align.
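A first pass at that inventory step can be as simple as flagging agents that fail basic lifecycle checks. The records and field names below are hypothetical, sketched only to show the shape of such a check:

```python
from datetime import date, timedelta

# Hypothetical inventory: each agent gets a unique identity, an accountable
# owner, a documented mission, and lifecycle metadata (an expiry date).
agents = [
    {"id": "report-agent-01", "owner": "finance-eng",
     "mission": "generate quarterly reports",
     "expires": date.today() + timedelta(days=90)},
    {"id": "infra-agent-07", "owner": None,
     "mission": "remediate vulnerabilities",
     "expires": date.today() - timedelta(days=1)},
]

def governance_gaps(inventory):
    """Flag agents that fail basic lifecycle checks: no owner or expired identity."""
    gaps = []
    for agent in inventory:
        if agent["owner"] is None:
            gaps.append((agent["id"], "no accountable owner"))
        if agent["expires"] < date.today():
            gaps.append((agent["id"], "identity past its lifecycle expiry"))
    return gaps

for agent_id, issue in governance_gaps(agents):
    print(f"{agent_id}: {issue}")
```

Each flagged gap maps directly back to one of the steps above: a missing owner breaks accountability, and an expired identity means lifecycle management has lapsed.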
Autonomy without governance is a major risk. Identity without intent is incomplete.
In the agentic era, knowing who is acting is essential. Ensuring they are acting for the right reason is what makes agentic AI secure.
If you are securing agentic AI, we would love to show you a technical demo of Token and hear more about what you are working on.
Sponsored and written by Token Security.
