The FIDO Alliance Just Drew the Blueprint for Agentic Commerce. Here's What They Missed.

The FIDO Alliance's recent announcement that it will develop formal standards for trusted AI agent interactions is a welcome milestone for the identity industry. If you've been watching the space, you know that "agentic AI" has been the subject of a lot of handwaving. By committing to formal standards, FIDO has advanced the conversation from philosophy to actual infrastructure.
First off, kudos to the FIDO Alliance. An industry-approved identity blueprint has been sorely needed; it will improve interoperability, reduce fragmentation risk, and give end users safer, more consistent experiences. But the plans have holes, and we are dedicated to closing the gap by advocating for a slightly different framework.
What FIDO actually announced
Two protocols are now being folded into formal FIDO standards work: AP2, Google's payment protocol for AI agents, and Verifiable Intent (VI), the trust framework developed by Mastercard in collaboration with Google. Both are being developed across two working groups, Payments and Agent Authentication, which gives this effort the kind of multi-stakeholder legitimacy that turns "emerging spec" into "industry backbone." The analogy FIDO draws is deliberate and apt: passkeys became the standard implementation for WebAuthn. AP2 and VI are positioned to do the same for agentic commerce.
Together, these protocols define a cryptographic delegation chain. An issuer vouches for a user. The user authorizes an agent, all within defined limits: spend ceilings, permitted merchants, time windows. The agent executes strictly inside those limits. It's an elegant model that translates the intuitive idea of "I'm letting my AI handle this" into something machines can verify and enforce — in effect, a permission slip.
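To make that delegation chain concrete, here is a minimal sketch of a mandate with spend ceilings, permitted merchants, and a time window, plus the limit check an agent runtime would enforce. The field names and structure are illustrative assumptions, not the actual AP2 or VI wire format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical delegation mandate; field names are illustrative,
# not the actual AP2/VI schema.
@dataclass
class DelegationMandate:
    user_key_id: str          # key the issuer vouched for
    agent_key_id: str         # key the user delegated to
    spend_ceiling_cents: int  # maximum spend per transaction
    permitted_merchants: set  # allow-listed merchant IDs
    not_before: datetime      # start of the validity window
    not_after: datetime       # end of the validity window

def within_limits(m: DelegationMandate, merchant: str,
                  amount_cents: int, now: datetime) -> bool:
    """Check a proposed transaction against the mandate's limits."""
    return (
        merchant in m.permitted_merchants
        and amount_cents <= m.spend_ceiling_cents
        and m.not_before <= now <= m.not_after
    )
```

In a real deployment the mandate itself would be signed by the user's key and verified by the relying party; the point here is only that "I'm letting my AI handle this" reduces to machine-checkable predicates.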
The fact that FIDO is formalizing this across both a payments working group and an agent authentication working group is significant. It signals that the industry must stop treating AI agent trust as a niche edge case, and instead, treat it as a core infrastructure layer for how commerce and data exchange will work in an agentic world.
The bigger story: What these protocols don't do
AP2 and VI specify the shape of the credential chain. They are carefully and deliberately silent on one foundational question: how do you verify the human at the beginning of that chain?
The protocols assume a trustworthy entity has already issued the foundational identity credential. The specs today simply take "this key belongs to a real human" as a given; they don't say how that binding gets established. To be clear, that's not a flaw in the design. It's a scoping decision: the identity layer is explicitly left as a solved-elsewhere problem.
This is where the risk concentrates: a cryptographic chain is only as strong as its anchor. If the key at the root of the chain is associated with an unverified, synthetic, or fraudulently claimed identity, the entire delegation architecture, no matter how cryptographically elegant, becomes a mechanism for laundering bad actors into trusted interactions. Garbage in, garbage out, by another name.
This isn't a criticism of the protocols for ignoring technical complexity. It's a reminder that an effective approach to identity has to be considered, and employed, at the root of the chain.
Any protocol that delegates real-world power to an AI agent has to be able to answer four questions, with proof, at any point in the transaction lifecycle:
- Who is this agent acting for? (consumer identity)
- Is the business on the other end legitimate? (business identity)
- What can this agent spend? (payment authority)
- What is it actually permitted to do? (authorization scope)
VI and AP2 answer the last two with precision. They leave the first two, which are the true identity questions, explicitly open.
The right mental model: Peer-to-Peer, with the LLM in the middle
One frame I keep coming back to when thinking about agentic systems: everything in agentic commerce should be thought of as peer-to-peer, with the agent runtime as the intermediary. The consumer on one end, the business on the other, and the AI agent(s) executing somewhere in between.
That framing clarifies the trust problem immediately. In a traditional transaction, both parties have visibility into each other; identity signals flow in both directions. The consumer knows, for example, that they're on a bank's website. The bank knows they're talking to a verified customer. When an AI agent intermediates that transaction, those bilateral trust signals need to be preserved and extended to the agent layer. The chain of custody for identity has to run from the moment of verification and delegation all the way through to dispute resolution.
A more complete blueprint
The FIDO announcement is a great call to action for identity infrastructure providers. The protocols have drawn the skeleton; the identity layer is the connective tissue.
Prove's position here is straightforward. We generate, manage, and leverage keys associated with real, verified entities (people and businesses) and we've been building toward a world where those keys can serve as the foundational credential layer for exactly the kind of delegation chains VI and AP2 describe. Our trust registry and trust key management capabilities are the direct answer to the gap these protocols leave open.
Keys and devices are how longitudinal trust actually works online. They persist across sessions, contexts, and platforms. They prove possession, and not just of a device: they carry identity attributes, payment methods, consent records, and reputation signals accumulated over time. This is fundamentally different from point-in-time authentication. It's trust as a continuous, verifiable state.
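The distinction between a bare key and a longitudinal trust anchor can be sketched as a data structure. Everything below is an illustrative assumption about what such a record might carry; it is not a Prove API or any published schema.

```python
from dataclasses import dataclass, field

# Illustrative only: what a longitudinal key record might hold beyond
# a bare public key. Field names are assumptions, not a real schema.
@dataclass
class TrustedKeyRecord:
    key_id: str
    public_key_pem: str
    identity_attributes: dict              # e.g. verified name, address
    payment_methods: list                  # tokenized instruments
    consent_records: list                  # delegation consents on file
    reputation_events: list = field(default_factory=list)

    def record_event(self, event: str) -> None:
        """Append a reputation signal; trust accrues across sessions."""
        self.reputation_events.append(event)
```

A point-in-time authenticator answers "did this key sign this challenge?"; a record like this lets a relying party also ask what the key has been bound to and how it has behaved over time.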
When an AI agent initiates a transaction, the relying party on the other end needs to know: this agent is acting for a real, verified person. That person's identity attributes have been confirmed. Their payment authority is legitimate. Their consent to delegate this action is on record. And if something goes wrong, there's a traceable chain from action back to actor.
That needs to be understood as the prerequisite for agentic commerce to function at scale.
What identity professionals should be watching
For those of us who've spent careers thinking about identity infrastructure, the FIDO announcement is a signal to get moving on a few things.
The trust registry problem is about to become urgent. As agents proliferate, relying parties will need a fast, reliable way to resolve whether the identity behind a delegation chain is trustworthy. The mechanisms for building and querying those registries need to be in place before the volume hits.
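The shape of that query is simple to sketch. The registry API below is entirely hypothetical; the point is that a relying party needs a fast, deterministic answer about the root of a delegation chain before honoring it.

```python
# Hypothetical in-memory trust registry; a production registry would be
# a federated, signed, queryable service, not a dict.
REGISTRY = {
    "key-abc": {"issuer": "example-bank", "status": "active"},
    "key-xyz": {"issuer": "unknown", "status": "revoked"},
}

def resolve_root_trust(root_key_id: str) -> bool:
    """Return True only if the chain's root key is registered and active."""
    entry = REGISTRY.get(root_key_id)
    return entry is not None and entry["status"] == "active"
```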
The credential issuance gap is real. AP2 and VI assume a trustworthy issuer at the root of the chain. The industry needs to get serious about who plays that role, what standards they're held to, and how their credentials interoperate across the delegation chain.
And the dispute resolution layer is underspecified across the board. When an AI agent makes a purchase the consumer didn't authorize, or a merchant gets defrauded by a synthetic identity with a valid-looking credential chain, who bears the liability and how is it adjudicated? The identity infrastructure needs to carry the evidentiary chain all the way through, not just to the point of transaction initiation.
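One way to carry that evidentiary chain end to end is a tamper-evident log in which each event links back to the previous one by hash, from identity verification through delegation to the disputed action. This is a minimal sketch of the idea, not a specified part of AP2 or VI.

```python
import hashlib
import json

def append_link(chain: list, event: dict) -> list:
    """Append an event whose hash covers the previous link,
    making the chain of custody tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps({"event": event, "prev": prev_hash},
                         sort_keys=True)
    link = {
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    chain.append(link)
    return chain
```

An adjudicator replaying the chain can recompute each hash and detect any altered or missing step, which is the property dispute resolution needs from the identity layer.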
FIDO has given the industry a credible, serious framework for how AI agents will be authorized to act. The next move belongs to identity providers who can answer the questions that framework left open.
