On March 2, 2026, Pieter Kasselman (Defakto Security), Jean-François Lombardo (AWS), Yaroslav Rosomakho (Zscaler), and Brian Campbell (Ping Identity) published draft-klrc-aiagent-auth-00: a framework for AI agent authentication and authorization. It's the first serious attempt to apply existing identity standards to agentic AI. And the most interesting thing about it is what it chooses not to invent.
The core thesis: agents are workloads
The draft opens with a deceptively simple claim: an AI agent is a workload. Not a user. Not a service account. Not a new category that requires new protocols. A workload, in the same sense that a Kubernetes pod or a serverless function is a workload.
This matters enormously. The immediate consequence is that decades of identity and authorization standards already apply. SPIFFE for workload identity. OAuth 2.0 for delegated authorization. WIMSE for cross-system workload credentials. The draft argues, convincingly, that the industry doesn't need new protocols for agent auth. It needs to compose existing ones correctly.
I think this is exactly right, and it's a direct counter to the flood of startups building proprietary "agent identity" solutions from scratch. If your agent identity system isn't built on SPIFFE or an equivalent workload identity framework, you're reinventing a wheel that was already round.
What the framework covers
The draft defines an "Agent Identity Management System" (AIMS) as a stack of capabilities. Reading bottom to top:
- Identifier. Every agent gets a WIMSE/SPIFFE URI: spiffe://trust-domain/path. One agent, one stable identifier. This is the foundation everything else builds on.
- Credentials. X.509 certificates or JWT-based Workload Identity Tokens (WITs), cryptographically bound to the identifier. Short-lived. Automatically rotated.
- Attestation. How you prove the agent is what it claims to be. Hardware TEE measurements, platform signals, orchestration metadata. The identity-proofing step.
- Provisioning. Runtime credential issuance and rotation. No static secrets. No manual key management.
- Authentication. mTLS at the transport layer, WIMSE Proof Tokens or HTTP Message Signatures at the application layer.
- Authorization. OAuth 2.0 for delegation. Transaction Tokens for scope reduction within call chains.
- Monitoring & Remediation. OpenID Shared Signals Framework for real-time security events. Tamper-evident audit logs.
- Policy. Deployment-specific. Explicitly out of scope for standardization.
- Compliance. Also out of scope. Assessed by auditing behavior against policy.
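The identifier layer at the bottom of this stack is simple enough to sketch. A minimal parser for the spiffe://trust-domain/path form, assuming nothing beyond the URI shape described above (the trust domain and agent path in the example are made up):

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass(frozen=True)
class SpiffeId:
    """A SPIFFE ID of the form spiffe://<trust-domain>/<path>."""
    trust_domain: str
    path: str

    @classmethod
    def parse(cls, uri: str) -> "SpiffeId":
        parsed = urlparse(uri)
        if parsed.scheme != "spiffe" or not parsed.netloc:
            raise ValueError(f"not a SPIFFE ID: {uri!r}")
        return cls(trust_domain=parsed.netloc, path=parsed.path)

# One agent, one stable identifier (hypothetical trust domain).
agent_id = SpiffeId.parse("spiffe://example.org/agents/support-bot")
```

Everything above this layer, credentials, attestation, authentication, hangs off that one stable identifier.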
It's a remarkably well-structured document. Each layer depends on the one below it, and the draft is honest about where it stops: policy format and compliance criteria are left to implementers.
What it gets right
Static API keys are an anti-pattern
The draft says it plainly: "Static API keys are an antipattern for agent identity. They are bearer artifacts that are not cryptographically bound, do not convey identity, are typically long-lived and are operationally difficult to rotate."
This needed to be said in an IETF document. Right now, the vast majority of AI agent deployments authenticate to LLM providers and tools using static API keys stored in environment variables. Every framework tutorial starts with OPENAI_API_KEY=sk-.... The draft correctly identifies this as fundamentally inadequate for production agent systems.
An API key tells you which account is paying for the request. It does not tell you which agent made the request, what authority that agent has, or whether it should be allowed to do what it's asking. Those are different problems, and conflating them is how you end up with agents that can do anything the API key allows, which is typically everything.
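The distinction can be made concrete. An API key is an opaque bearer string; a workload credential is a short-lived signed token whose claims name the agent. A minimal sketch, using standard JWT claim names and an HMAC signature as a stand-in for the asymmetric signatures a real issuer would use:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_workload_token(agent_spiffe_id: str, key: bytes, ttl_s: int = 300) -> str:
    """Mint a short-lived token that names the agent, unlike an opaque API key."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    # sub identifies WHICH agent; exp keeps the credential short-lived.
    claims = {"sub": agent_spiffe_id, "iat": now, "exp": now + ttl_s}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

# An API key answers "which account pays"; this token answers "which agent acted".
token = mint_workload_token("spiffe://example.org/agents/crm-bot", key=b"demo-key")
```

The point is not the crypto, it's the claims: the verifier learns which agent made the request and for how long the credential is valid, neither of which an API key conveys.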
Transaction Tokens for scope reduction
Section 10.4 on Transaction Tokens is where the draft gets genuinely interesting from a security perspective. The problem: when an agent calls a tool, and that tool is itself composed of microservices, the access token gets forwarded through the internal call chain. If any of those internal services is compromised, the attacker gets a broadly-scoped access token.
The solution: exchange the access token for a Transaction Token that is bound to a specific transaction, includes context (caller IP, transaction parameters), and is short-lived. The Transaction Token can't be used for a different transaction or with modified parameters. This is a meaningful security improvement over the "pass the bearer token around" pattern that most agent systems use today.
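One way to picture the binding, with field names that are illustrative rather than the Transaction Token specification's exact claim set: hash the transaction parameters into the token, so a replayed token with modified parameters fails verification.

```python
import hashlib, json, time

def issue_txn_token(subject: str, txn_params: dict, caller_ip: str, ttl_s: int = 30) -> dict:
    """Exchange call context for a short-lived token bound to one transaction."""
    now = int(time.time())
    return {
        "sub": subject,
        # Bind the token to these exact parameters.
        "txn_hash": hashlib.sha256(json.dumps(txn_params, sort_keys=True).encode()).hexdigest(),
        "caller_ip": caller_ip,
        "iat": now,
        "exp": now + ttl_s,
    }

def verify_txn_token(token: dict, txn_params: dict, caller_ip: str) -> bool:
    expected = hashlib.sha256(json.dumps(txn_params, sort_keys=True).encode()).hexdigest()
    return (
        token["txn_hash"] == expected
        and token["caller_ip"] == caller_ip
        and time.time() < token["exp"]
    )

tok = issue_txn_token("spiffe://example.org/agents/crm-bot",
                      {"action": "get_record", "id": 42}, "10.0.0.5")
assert verify_txn_token(tok, {"action": "get_record", "id": 42}, "10.0.0.5")
# A compromised internal service can't repurpose the token for a different transaction.
assert not verify_txn_token(tok, {"action": "export_all"}, "10.0.0.5")
```

A stolen bearer token is useful for anything its scope allows; a stolen transaction token is useful for exactly one transaction for a few seconds.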
Human-in-the-loop as authorization, not UI
Section 10.6 makes a subtle but important distinction. When an agent pauses and asks a user "should I proceed?", that is not authorization. It's a UI interaction. Real authorization requires a verifiable grant from an authorization server. The draft proposes using CIBA (Client-Initiated Backchannel Authentication) to turn user confirmation into an actual OAuth authorization event.
This is a direct response to how frameworks like MCP handle tool approval today: the user clicks "allow" in a chat interface, and the agent proceeds. There's no cryptographic proof that the user approved, no audit trail linking the approval to the action, and no way to verify after the fact that the approval was genuine. The draft says agents "MUST NOT treat local UI confirmation alone as sufficient authorization." That's the right call.
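The shape of the CIBA flow, from the agent's side, is a backchannel request followed by polling for a grant. A sketch assuming hypothetical endpoint URLs and a generic `http_post` callable standing in for a real HTTP client; the grant_type URN and auth_req_id parameter follow the OpenID CIBA Core flow:

```python
import time

def request_user_approval(http_post, login_hint: str, scope: str) -> dict:
    # Backchannel authentication request: asks the AS to prompt the user
    # out-of-band, instead of the agent rendering its own "allow" button.
    return http_post("https://as.example.org/bc-authorize",
                     {"login_hint": login_hint, "scope": scope})

def await_grant(http_post, auth_req_id: str, interval: int, timeout_s: int = 120) -> dict:
    # Poll the token endpoint until the user approves (or we give up).
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = http_post("https://as.example.org/token", {
            "grant_type": "urn:openid:params:grant-type:ciba",
            "auth_req_id": auth_req_id,
        })
        if "access_token" in resp:
            return resp  # a verifiable authorization event, not a UI click
        time.sleep(interval)
    raise TimeoutError("user did not approve in time")
```

The difference from a chat-interface "allow" button is what comes back: an access token issued by the authorization server, which is auditable and verifiable after the fact.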
What it doesn't cover
The draft is an authentication and authorization framework. It is not a governance framework. This is an important distinction that the document itself acknowledges by keeping policy and compliance explicitly out of scope. But it means several hard problems remain unaddressed.
What happens between auth and action
Authentication answers "who is this agent?" Authorization answers "is this agent allowed to call this tool?" But neither answers "should this specific request, with this specific content, at this point in this session, be allowed to proceed?"
An agent is authenticated. It has a valid OAuth token with scope to call the CRM tool. It sends a request to export all customer records to an external endpoint. The auth layer says: allowed. The governance layer, if one exists, would say: hold on.
The draft acknowledges that "monitoring, observability and remediation" should detect misuse patterns like "privilege escalation or unexpected action sequences." But it treats this as a monitoring problem, not a policy enforcement problem. The difference matters. Monitoring tells you after the fact. Enforcement stops the action before it completes.
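The structural difference is small but decisive: an enforcement gate evaluates policy before the tool executes, where a monitoring pipeline sees the same data only afterwards. A minimal sketch; the policy rule and tool are invented for illustration:

```python
def enforce_then_call(tool, request: dict, policy) -> dict:
    decision = policy(request)
    if not decision["allow"]:
        # Enforcement: the action never executes.
        raise PermissionError(decision["reason"])
    # Monitoring alone would only observe `request` here, after the fact.
    return tool(request)

def no_bulk_export(request: dict) -> dict:
    """Example policy: a valid token may call the CRM, but not export everything."""
    if request.get("action") == "export_all":
        return {"allow": False, "reason": "bulk export requires review"}
    return {"allow": True, "reason": "ok"}

crm_tool = lambda req: {"status": "ok", "id": req.get("id")}
result = enforce_then_call(crm_tool, {"action": "get_record", "id": 7}, no_bulk_export)
```

In the CRM scenario above, the auth layer and the policy gate both run, and it is the gate, not a later alert, that stops the export.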
Session-level behavior
OAuth authorization is evaluated per request; an agent's risk profile accrues per session. An agent that makes 3 tool calls in a session looks different from one that makes 300. An agent that gradually escalates its requests across a session, each one individually authorized, can end up somewhere no single authorization decision would have allowed.
The draft's Transaction Tokens are a step in the right direction because they bind authorization to specific transactions. But there's no concept of session-level policy: budget limits, escalation detection, loop guards, cumulative risk scoring. These are governance problems, not auth problems, and the draft correctly doesn't try to solve them. But they need solving.
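What a session-level budget might look like, as a sketch: a per-session guard that admits individually-authorized calls until a cumulative limit is hit. The thresholds and risk scores are invented for illustration, and a real implementation would add escalation and loop detection on top.

```python
class SessionGuard:
    """Track cumulative call count and risk across one agent session."""

    def __init__(self, max_calls: int = 100, max_risk: float = 10.0):
        self.calls = 0
        self.cumulative_risk = 0.0
        self.max_calls = max_calls
        self.max_risk = max_risk

    def admit(self, risk_score: float) -> bool:
        """Admit one tool call; False once the session exceeds its budget."""
        self.calls += 1
        self.cumulative_risk += risk_score
        return self.calls <= self.max_calls and self.cumulative_risk <= self.max_risk

guard = SessionGuard(max_calls=3, max_risk=5.0)
assert guard.admit(1.0)       # individually fine
assert guard.admit(1.5)       # individually fine
assert not guard.admit(4.0)   # individually fine, cumulatively over budget
```

The third call is exactly the gradual-escalation case: no single authorization decision would reject it, but the session as a whole has drifted out of bounds.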
Content inspection
The framework operates at the protocol layer. It cares about identifiers, credentials, and tokens. It does not inspect what the agent is actually saying to the LLM or what the LLM is saying back. Prompt injection, data exfiltration through model output, PII leakage in tool call arguments, secret exposure in generated code: these are all invisible to an auth framework.
This isn't a criticism. An IETF draft shouldn't try to solve content-layer security. But it's worth being explicit: solving agent authentication does not solve agent security. It solves one layer of it. A necessary layer, but not a sufficient one.
The SPIFFE bet
The draft's most consequential design choice is building on SPIFFE. SPIFFE (Secure Production Identity Framework for Everyone) is a CNCF graduated project that provides workload identity in cloud-native environments. It's mature, widely deployed, and operationally proven.
The bet is that agent deployments will look like cloud-native deployments: containers, orchestrators, service meshes, short-lived workloads. For enterprise agents running in Kubernetes or similar environments, this is a natural fit. SPIFFE is already there. The agent just becomes another workload with a SPIFFE ID.
Where this gets complicated is at the edges. Agents running inside IDE extensions. Agents running in browser environments. Agents spawned by low-code platforms like n8n or Make. Agents running on laptops as CLI tools. These don't have SPIFFE infrastructure, and bootstrapping it is non-trivial. The draft acknowledges this implicitly by allowing "operator assertions" as an attestation mechanism, which is essentially a fallback to "trust the deployment platform." That's pragmatic, but it means the security guarantees degrade significantly outside cloud-native environments.
What this means for practitioners
If you're building agent systems today, the draft gives you a clear direction:
Stop using static API keys as agent identity. They're not identity. They're payment credentials. Separate the two concepts. Even if you can't deploy full SPIFFE infrastructure today, you can start issuing short-lived credentials per agent and rotating them automatically.
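Even a small in-house issuer gets you most of the way there. A sketch of a credential wrapper that refreshes before expiry, so callers never handle a long-lived secret; the issuer callable and TTLs are placeholders for whatever your platform provides:

```python
import time

class RotatingCredential:
    """Hold a short-lived per-agent credential and rotate it automatically."""

    def __init__(self, issue_fn, ttl_s: int = 300, refresh_margin_s: int = 60):
        self._issue = issue_fn          # your issuer: returns a fresh token
        self.ttl_s = ttl_s
        self.refresh_margin_s = refresh_margin_s
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Re-issue once we're inside the refresh margin before expiry.
        if time.time() >= self._expires_at - self.refresh_margin_s:
            self._token = self._issue()
            self._expires_at = time.time() + self.ttl_s
        return self._token
```

Swapping the issuer for a SPIFFE workload API later changes one callable, not your agent code.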
Use OAuth 2.0 for tool authorization. Don't invent your own authorization protocol. If your agent needs to access a tool, the tool should accept an OAuth access token, not a forwarded API key. This gives you scoping, revocation, and audit for free.
Plan for session-level governance. Authentication and authorization will get standardized. Content inspection, session policy, and behavioral governance will not. These are where your internal engineering effort should go, or where you should look for purpose-built tooling.
Build audit trails now. The draft requires tamper-evident logs recording agent identity, action, authorization decision, and timestamp. If you're not logging this today, start. It's the one thing every compliance framework will ask for regardless of jurisdiction.
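Tamper evidence does not require special infrastructure. One common construction is a hash chain: each entry commits to the previous one, so any edit to history breaks verification. A sketch recording the four fields the draft lists (agent identity, action, authorization decision, timestamp):

```python
import hashlib, json, time

GENESIS = "0" * 64

class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = GENESIS

    def record(self, agent_id: str, action: str, decision: str, ts=None):
        entry = {
            "agent": agent_id,
            "action": action,
            "decision": decision,
            "ts": ts if ts is not None else time.time(),
            "prev": self._last_hash,  # chain to the previous entry
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks it."""
        prev = GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A production system would anchor the chain head somewhere external (object-lock storage, a transparency log), but the logging discipline, agent, action, decision, timestamp on every call, is the part to start today.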
Where this goes from here
This is draft-00. It will evolve. The Security Considerations and Privacy Considerations sections are both marked "TODO," which tells you how early this is. But the architectural choices are sound, and the authors (from AWS, Zscaler, Ping Identity, and Defakto Security) bring the kind of identity and protocol expertise that makes the composition of existing standards credible rather than naive.
The more interesting question is adoption timeline. IETF standards move slowly. The EU AI Act's high-risk system requirements take effect on August 2, 2026. MCP, A2A, and other agent protocols are shipping now. There's a real risk that the industry builds its own fragmented identity layer before the standards catch up.
The draft acknowledges this tension in its introduction: "many of these efforts develop solutions in isolation, often reinventing existing mechanisms unaware of applicable prior art." That's a polite way of saying: everyone is building their own agent auth, and most of it is worse than what already exists.
The IETF draft is the first credible attempt to bring agent identity into the existing standards ecosystem rather than building a parallel one. Its core insight, that agents are workloads, is correct and consequential. Its scope, authentication and authorization only, is appropriately bounded.
What sits above that layer, the runtime governance that decides whether a properly-authenticated, properly-authorized agent should actually be allowed to do what it's about to do, is a different problem. It's the problem we're working on. But it's a much easier problem to solve when the identity layer beneath it is built on real standards rather than API keys and hope.
Read the full draft: draft-klrc-aiagent-auth-00
See governance at runtime
TapPass is in private beta. If your team is shipping AI agents, we'd rather get you on the product than in a pipeline.