AI agents are beginning to act on behalf of users inside banking applications - initiating transfers, retrieving account data, filing disputes. But the authentication infrastructure those agents rely on was built for humans. OAuth delegation was designed for apps, not autonomous decision-making software with the ability to take consequential financial actions. Passkeys and the emerging standards around agentic authentication offer a path forward, but financial institutions need to engage with this problem now, before the attack surface is fully visible.
The authentication challenges financial institutions have been navigating for the past decade have largely been human-centric. How do you verify that the person logging in is who they claim to be? How do you authenticate a transaction without creating friction that causes customers to abandon the flow? How do you balance security and experience in a world where both fraud sophistication and customer expectations are rising simultaneously?
Those questions remain urgent. But a new class of authentication challenge is arriving alongside them, and it is arriving faster than most security teams have had time to prepare for: how do you authenticate an AI agent acting on behalf of a user?
This is not a theoretical question. AI-powered assistants capable of taking real actions inside financial applications - initiating wire transfers, pulling account history, filing disputes, setting up recurring payments - are moving from research to production. The customers using them are granting those systems delegated access to their financial accounts. And the authentication infrastructure being used to enable that access was designed for a different kind of principal entirely.
The term 'AI agent' covers a wide range of capabilities, and it is worth being precise about what the authentication problem actually involves. A simple AI assistant that answers questions about your account balance, drawing on read-only data, raises limited authentication concerns beyond the standard session management questions any API-connected application faces.
The authentication challenge becomes acute when agents move from reading to acting. An AI system that can initiate a payment, move funds between accounts, execute a trade, or modify account settings is not just accessing data on a user's behalf - it is taking actions that are financially consequential, potentially irreversible, and that expose the institution to liability if something goes wrong. At that point, the question of how the agent was authenticated, and how its specific actions were authorized, becomes a core security question rather than a background infrastructure concern.
The deployment models for these systems in financial services are still evolving, but several patterns are already visible. Some institutions are building AI-powered assistants into their own applications, giving customers a natural language interface to their account capabilities. Others are integrating with third-party AI platforms to which users grant access via OAuth-based authorization flows. In both cases, the authentication question is the same: what establishes that the agent has the right to take a specific action at a specific moment?
OAuth 2.0 is the de facto standard for delegated authorization on the web. It is the mechanism that allows a user to grant a third-party application access to resources on their account without sharing their credentials directly. It is widely deployed in financial services, and it underlies most of the open banking infrastructure that has been built over the past several years.
OAuth was designed to solve a specific, well-understood problem: authorizing application-level access on behalf of a human user who is present at the moment of authorization. The user sees a consent screen, grants specific permissions, and the application receives a token that represents that grant. The human is in the loop at the authorization moment. Revocation is possible. Scope can be limited.
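The shape of that grant can be made concrete. A minimal sketch of the claims an OAuth-style access token might carry after the consent screen, and the check a resource server performs before honoring it (claim names follow common JWT conventions; the specific values and the helper function are illustrative, not any particular provider's API):

```python
import time

# Illustrative claims of an access token issued after a user consents.
# Claim names follow common JWT conventions (sub, iat, exp); values are made up.
token_claims = {
    "sub": "user-1234",              # the human who granted access
    "client_id": "budgeting-app",    # the application the grant was made to
    "scope": "accounts:read transfers:write",
    "iat": int(time.time()),         # issued now
    "exp": int(time.time()) + 3600,  # valid for one hour
}

def token_permits(claims: dict, required_scope: str, now: float) -> bool:
    """True if the token is unexpired and carries the required scope."""
    return now < claims["exp"] and required_scope in claims["scope"].split()
```

Note what the token does and does not encode: it captures the application, the scope, and the window of validity - but nothing about which specific actions will be taken inside that window, or whether the human is present when they are.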
Agentic AI creates pressure on every one of those assumptions. The user may authorize an agent to act on their behalf in a general sense - 'manage my finances' or 'handle my routine transfers' - without having visibility into every specific action the agent will take or every moment at which it will act. The agent may be operating autonomously, making decisions and taking actions without a human in the loop for each transaction. The grant is broad, the actions are specific, and the user is not present when the agent exercises the delegated authority.
That creates several distinct risks. Token theft - where an attacker intercepts or extracts the OAuth token the agent holds - is one. The agent as an attack vector is another: a compromised or manipulated AI system that has been granted broad financial access is a high-value target for adversarial manipulation. Scope creep, where agents gradually acquire broader access than was originally authorized, is a third. And the accountability question - when something goes wrong, which action taken by which agent was the proximate cause - is genuinely difficult to answer with standard OAuth audit logs.
Passkeys do not solve the agentic authentication problem by themselves, but they are a foundational piece of a solution architecture that does. The relevant properties are their cryptographic binding, their resistance to phishing and token theft, and their ability to scope authentication assertions to specific relying parties and specific contexts.
The most promising near-term architecture for agentic authentication in financial services involves several elements working together. The first is a human-anchored authorization step: before an agent can take consequential actions, the human user authenticates with a passkey and explicitly authorizes the agent's scope of action. That authorization event is cryptographically logged - not just an OAuth token issuance, but a signed assertion that a specific user, authenticated via a specific passkey, authorized a specific agent to take a defined set of actions.
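What such a logged authorization event might look like can be sketched as follows. A production system would use the passkey's asymmetric signature from the WebAuthn ceremony; an HMAC over a server-side audit key stands in here so the example is self-contained, and all names and key material are hypothetical:

```python
import hashlib
import hmac
import json
import time

# Stand-in for the signing material; a real system would anchor this in the
# passkey's WebAuthn assertion rather than a symmetric server-side key.
LOG_KEY = b"server-side-audit-log-key"  # hypothetical key material

def record_agent_grant(user_id: str, credential_id: str,
                       agent_id: str, actions: list) -> dict:
    """Produce a signed record binding user, passkey, agent, and scope."""
    grant = {
        "user": user_id,
        "passkey_credential": credential_id,
        "agent": agent_id,
        "authorized_actions": sorted(actions),
        "issued_at": int(time.time()),
    }
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["signature"] = hmac.new(LOG_KEY, payload, hashlib.sha256).hexdigest()
    return grant

def verify_grant(grant: dict) -> bool:
    """Recompute the signature over the grant body and compare."""
    body = {k: v for k, v in grant.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(LOG_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, grant["signature"])
```

The point of the structure, not the particular crypto, is what matters: the record names the user, the credential that authenticated them, the agent, and the exact action set - and any later tampering with the scope invalidates the signature.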
The second element is stepped authorization for high-consequence actions. Rather than treating a broad authorization grant as permission for all actions within scope, the system requires stepped confirmation for actions above a defined consequence threshold. A recurring small transfer might execute automatically within an approved authorization. A large one-time payment, or an action in a new merchant category, triggers a request back to the human for explicit confirmation - ideally via a Secure Payment Confirmation-style prompt that shows the specific action and requires biometric confirmation.
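A consequence-threshold policy of this kind reduces to a small decision function. The threshold value and category logic below are illustrative assumptions, not a recommendation:

```python
from dataclasses import dataclass

# Illustrative threshold: actions above this amount always require step-up.
AUTO_APPROVE_LIMIT = 200.00

@dataclass
class AgentAction:
    kind: str                   # e.g. "transfer", "payment"
    amount: float
    merchant_category: str
    category_seen_before: bool  # has this grant covered this category already?

def requires_step_up(action: AgentAction) -> bool:
    """True if the human must confirm via a fresh passkey ceremony."""
    if action.amount > AUTO_APPROVE_LIMIT:
        return True              # high-value: always confirm
    if not action.category_seen_before:
        return True              # novel merchant category: confirm
    return False                 # routine action: execute under the grant
```

When `requires_step_up` returns true, the confirmation prompt shown to the user should display the specific action - amount, payee, timing - so that what the biometric confirms is what the agent will actually do.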
The third element is short-lived, narrowly scoped credentials for agent actions. Rather than issuing a long-lived OAuth token that an agent holds and exercises over time, each agent action uses a short-lived credential scoped to that specific action - generated from the underlying human authorization, but not reusable for other purposes. This limits the blast radius of a compromised agent: an attacker who extracts the credential has something that works for one specific action, not a skeleton key for the account.
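A per-action credential of this kind can be sketched with an HMAC over the agent, the action, and a short expiry. A real system would derive the credential from the logged human authorization; the key, lifetime, and encoding here are illustrative:

```python
import hashlib
import hmac
import time

ISSUER_KEY = b"per-action-credential-key"  # hypothetical key material
LIFETIME_SECONDS = 60                      # illustrative short window

def mint_action_credential(agent_id: str, action: str, now: int) -> str:
    """Credential bound to one agent, one named action, and a short expiry."""
    expires = now + LIFETIME_SECONDS
    msg = f"{agent_id}|{action}|{expires}".encode()
    sig = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return f"{agent_id}|{action}|{expires}|{sig}"

def redeem(credential: str, agent_id: str, action: str, now: int) -> bool:
    """Accept only the exact action, from the exact agent, before expiry."""
    try:
        cred_agent, cred_action, expires, sig = credential.rsplit("|", 3)
    except ValueError:
        return False
    msg = f"{cred_agent}|{cred_action}|{expires}".encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, sig)
            and cred_agent == agent_id
            and cred_action == action
            and now < int(expires))
```

Because the action string is part of the signed material, a stolen credential cannot be replayed against a different payee or amount, and the short expiry bounds how long it is worth stealing at all.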
The authentication standards community is aware of the agentic problem and is beginning to address it, but the work is early. The FIDO Alliance's work on passkeys provides the credential foundation. On the authorization delegation side, the IETF's GNAP (Grant Negotiation and Authorization Protocol), the developing conversation around OAuth extensions for agentic contexts, and the OpenID Foundation's ongoing work on identity standards are the most relevant efforts.
NIST's digital identity guidelines (SP 800-63) provide a framework for thinking about identity assurance levels that applies to agentic contexts, but specific guidance for autonomous agent authentication is not yet fully developed. The emerging conversations in standards bodies around 'delegated credentials' and 'transaction authorization' are directly relevant, but they have not yet produced the stable, widely implemented specifications that OAuth and WebAuthn represent.
That gap - between the deployment reality of AI agents in financial applications and the maturity of the authentication standards designed for them - is the risk window that security teams need to be thinking about now. The institutions that will handle agentic AI authentication well are the ones that establish their architecture principles before the agents are fully deployed, not the ones that retrofit authentication controls onto an agent ecosystem that has already grown beyond easy management.
The agentic authentication problem does not require waiting for fully mature standards to begin building defensible architecture. Several practical steps are available today.
The first is treating agent authorization as a first-class authentication event rather than a subset of existing OAuth grants. Every moment at which a human user authorizes an AI agent to act on their behalf should be treated as an authentication ceremony that is cryptographically logged, specifically scoped, and revocable at will. Building that infrastructure now, before the agent ecosystem expands, creates the accountability foundation that security teams and regulators will eventually require.
The second is applying the principle of least privilege aggressively to agent credentials. Agents should hold the minimum authorization necessary to perform their defined function, for the minimum time period, with explicit re-authorization required when scope or duration expands. This is standard security practice, but it requires deliberate architectural choices that are much harder to retrofit than to build in from the start.
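Enforcing that principle amounts to checking every agent request against an explicit allow-list and expiry, and routing anything outside either back to the human. A minimal sketch, with illustrative names and structure:

```python
import time

def check_agent_request(grant: dict, requested_action: str, now: float) -> str:
    """Return 'allow' or 'reauthorize' as a least-privilege policy decision."""
    if now >= grant["expires_at"]:
        return "reauthorize"   # duration exceeded: renew, never extend silently
    if requested_action not in grant["allowed_actions"]:
        return "reauthorize"   # scope expansion: back to the human
    return "allow"

# Illustrative grant: a narrow action set with a deliberately short lifetime.
grant = {
    "agent": "agent-9",
    "allowed_actions": {"accounts:read", "transfers:small"},
    "expires_at": time.time() + 900,  # 15-minute grant
}
```

The design choice that matters is that both failure modes resolve to re-authorization rather than silent expansion: an expired or out-of-scope request is never upgraded by the system on the agent's behalf.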
The third is planning for the adversarial manipulation scenario specifically. A human authenticating with a passkey is protected from phishing because the credential is bound to the relying party - the credential simply will not work on a fake site. An AI agent is potentially vulnerable to prompt injection or adversarial instruction - manipulation that causes it to take actions outside its authorized scope without the human user's awareness. Designing authentication architectures that require explicit human confirmation for high-consequence actions is the mitigation, and it requires that the authentication infrastructure for step-up confirmation be in place before the agents that would trigger it are deployed.
Agentic AI authentication is an early-stage problem, which is precisely why it deserves early-stage attention. The authentication challenges that financial institutions are still managing today - account takeover via credential stuffing, SIM swap fraud, phishing-resistant MFA deployment - did not become widely recognized as urgent until the attack patterns were well-established and the fraud losses were measurable. By then, the defensive infrastructure was catching up to a threat that had already been operating at scale.
The institutions that engage with the agentic authentication question now - before AI agents are processing high volumes of financial transactions, before the adversarial community has fully characterized the attack surface, before regulators have developed specific requirements - are the ones that will have defensible architecture when all of those things arrive. The question is not whether AI agents will need robust authentication infrastructure in financial services. The question is whether that infrastructure will be ready when the agents are.
FIDO Alliance: Passkeys - fidoalliance.org
FIDO2 Technical Specifications - fidoalliance.org
W3C Web Authentication (WebAuthn) Specification - w3.org
OpenID FAPI 2.0 Security Profile - openid.net