Identity architecture for AI agents requires answering questions that permission models don't address.
First, provenance. Where did this agent come from? What model is it running? Who trained it, and on what data? When was it last updated? These aren't metadata curiosities. They're the audit trail that lets an organization trace a bad decision back to its source.
Second, attestation. How do we know this agent is what it claims to be? When Agent A communicates with Agent B across organizational boundaries, what prevents impersonation? Traditional API authentication assumes the caller is a system under your control. Agentic interactions assume nothing.
Third, lifecycle. Agents aren't static. They learn, adapt, and sometimes drift. An agent that passed validation six months ago may behave differently today. Identity architecture must include mechanisms for continuous verification, not just initial provisioning.
A single AI agent with scoped permissions is manageable. A hundred agents interacting across an enterprise is a coordination problem. A thousand agents negotiating across organizational boundaries is a trust architecture challenge that permissions alone cannot solve.
Consider what happens when your procurement agent interacts with a supplier's fulfillment agent. Both agents have permissions within their respective organizations. But the interaction itself crosses a trust boundary that neither organization fully controls.
Who verifies the other agent's identity? Who audits the negotiation? Who is liable if the interaction produces an outcome neither organization intended?
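The audit question, at least, has a familiar answer: both parties append each negotiation step to a hash-chained log, so neither can quietly rewrite history afterward. This is a hypothetical sketch of that idea, not a standard; the actor names and entry fields are invented for illustration:

```python
import hashlib
import json

def append_entry(log: list, actor: str, action: str, ts: str) -> None:
    """Each entry commits to its predecessor's hash, so altering any
    earlier entry visibly breaks the chain that follows it."""
    prev = log[-1]["hash"] if log else "genesis"
    body = {"actor": actor, "action": action, "ts": ts, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

log: list = []
append_entry(log, "acme/procurement-agent", "request: 500 units",
             "2025-01-10T09:00Z")
append_entry(log, "supplier/fulfillment-agent", "quote: $12/unit",
             "2025-01-10T09:01Z")

# The second entry is cryptographically bound to the first:
print(log[1]["prev"] == log[0]["hash"])  # True
```

A shared tamper-evident record doesn't settle liability by itself, but it guarantees that when an unintended outcome occurs, both organizations are arguing from the same sequence of events.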
Permission models assume a central authority that grants and revokes access. Identity architecture acknowledges that in a world of autonomous agents, there is no central authority. Trust must be negotiated, verified, and continuously maintained between parties who may never fully control each other's systems.