March 12, 2025
The Rise of Agentic AI: When Machines Take the Lead, Who Do They Become?
You need to book a flight for an upcoming business trip. In the past, you’d search multiple airline websites, compare prices, check your frequent flyer benefits, enter your payment details, and manually upload your travel documents. Today, an AI agent can handle all of this autonomously.
As soon as you express your intent—whether through voice, text, or a calendar entry—your AI agent begins its search. It accesses your preferences, loyalty programs, and past travel history to find the best flight options, factoring in layovers, seat preferences, and even your airline meal choices. It checks visa and passport requirements, ensuring your travel documents are valid. Once the best itinerary is selected, the AI handles booking, secures payment, applies any available discounts or loyalty points, and even syncs the itinerary with your calendar.
At the airport, the AI ensures your digital boarding pass is ready, pre-checks you into the flight, and manages biometric verification at security gates—seamlessly linking to your decentralized identity to confirm you are the rightful traveler. If a delay occurs, it proactively rebooks connections and alerts ground transportation services, ensuring a smooth journey from start to finish.
This is the age of agentic AI—systems designed not merely to process data but to act on it, adapting and optimizing in real-time.
Unlike traditional AI, which follows predefined instructions, agentic AI operates with intent, reasoning through problems, making decisions, and learning from experience. These systems don’t just provide answers; they take actions. They schedule meetings, negotiate contracts, approve transactions, even write and deploy software updates—often without human intervention. They are not just assistants; they are actors in the digital ecosystem.
The Future is Now: Where Agentic AI Thrives
Agentic AI is already transforming entire industries by enabling systems to perform complex tasks autonomously, leading to increased efficiency and innovation across sectors. In fintech, Klarna, Ally Bank, Sharesies, Enova, and Upstart have all been public about using AI-powered chatbots and virtual assistants that provide personalized financial advice based on individual data and needs. Klarna appears to have taken it one step further, closer to the full loop described above, announcing an advanced AI assistant that handles customer payments, refunds, and other payment escalations.
The Klarna case interests me because it uses AI for more than recommendations or an enhanced chatbot, and because it touches payments, which ultimately ties into identity management and my central question: when AI acts on our behalf, how do we ensure it isn't manipulated, corrupted, or impersonated?
Identity in an Agentic AI World
As we can see, agentic AI-based systems are no longer just tools—they are decision-makers, shaping our interactions with businesses, services, and even governments. With this transformative power comes a critical challenge: trust.
We are already facing a fraud crisis. Deepfake attacks, synthetic identity fraud, and AI-driven social engineering scams have reached alarming levels, costing businesses billions and eroding public trust. Agentic AI introduces a new layer of complexity—if an AI agent executes a financial transaction on someone’s behalf, how do we prove that the request was legitimate?
What happens when a consumer claims they never authorized a payment? Today, we rely on account ownership records, device fingerprinting, and transaction logs to determine responsibility. But what if an AI agent was compromised by an injection attack—where a malicious actor subtly alters commands or identity tokens without detection? How do we trace accountability in a world where machine-to-machine interactions are seamless and increasingly autonomous?
A critical emerging challenge is machine-to-machine authentication—the process of verifying the identity of machines, whether hardware or software, to enable secure, automated communication. It is widely used in IoT networks, enterprise systems, and automated business processes, ensuring that only authorized devices or applications can interact. Typically, machines authenticate using credentials like client IDs, API keys, certificates, or OAuth 2.0 protocols, receiving short-lived tokens (e.g., JWTs) to validate their identity and permissions. While this framework secures structured interactions, agentic AI introduces a new challenge—AI agents must authenticate dynamically as they make independent decisions and execute transactions. In other words, unlike traditional authentication, which relies on human verification, machine-to-machine authentication requires cryptographic mechanisms that allow AI agents to confirm each other's identities before executing transactions or exchanging data. In practical terms, this means ensuring that when an AI-driven financial assistant sends payment instructions to a bank's system, both parties can verify that the request is legitimate, has not been altered, and originates from a trusted source. Without this assurance, AI-to-AI interactions become a massive attack surface for fraud, impersonation, and unauthorized transactions.
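To make the token mechanics concrete, here is a minimal sketch of the short-lived, JWT-style token flow described above, written in Python. It is illustrative only: the client ID and shared secret are hypothetical, and a real OAuth 2.0 deployment would use a token endpoint and typically asymmetric signing rather than a raw shared secret.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret between the agent platform and the bank's
# API gateway; real systems would use OAuth 2.0 client credentials.
SECRET = b"demo-shared-secret"

def b64(data: bytes) -> str:
    """URL-safe base64 without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(client_id: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived, JWT-style token for a machine client."""
    header = b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64(json.dumps({
        "sub": client_id,
        "exp": int(time.time()) + ttl_seconds,  # expiry limits the replay window
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str) -> bool:
    """Check signature and expiry before honoring a machine request."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged
    pad = "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload + pad))
    return claims["exp"] > time.time()  # reject expired tokens
```

The signature gives tamper evidence and the `exp` claim bounds how long an intercepted token is useful—two of the properties the bank-side verifier needs before acting on a payment instruction.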
Verifiable credentials (VCs) have been proposed as a solution, offering a cryptographic method to establish trust in digital interactions. A verifiable credential is a digital representation of an identity attribute, such as proof of employment, account ownership, or a credit score, issued by a trusted entity (e.g., a bank or government agency). These credentials are digitally signed and tamper-proof, allowing them to be independently verified without direct communication with the issuing authority. However, while verifiable credentials add a layer of trust, they are primarily linked to devices rather than individuals. This introduces a gap: if an AI agent operates across multiple platforms, if a device is compromised, or if an attacker takes over someone's credential, how do we ensure the credential remains bound to the correct entity?
To build true security for agentic AI, we need to think beyond credentials and toward biometric-bound identity systems—ones that ensure AI agents can act autonomously but remain provably tied to the right human from start to finish.
This is where decentralized biometrics and the circle of identity become indispensable.
Applying the Circle of Identity to Agentic AI
The Circle of Identity, a term I have coined, is a framework designed to provide continuous trust across the digital identity lifecycle.
It essentially works like this: biometrics and other data collected at account origination are registered and bound to any credential, device, token, or other asset. At the time of a given action, the same biometric is used to authenticate the user; the same applies to account recovery, reprovisioning private keys, managing digital assets, or signing transactions. Unlike static identity solutions that rely on PINs, passwords, device biometrics, and other weak authentication methods that can be compromised, the Circle of Identity combines biometric binding, cryptographic authentication, and real-time validation, making it a dynamic and adaptable security model. With the inclusion of deepfake and injection detection technologies, this approach ensures the person's identity is verified, authenticated, and bound to a secure mechanism at the moment of creation, and can then be trusted through any given interaction.
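The enroll-bind-authenticate loop above can be sketched in a few lines of Python. This is a toy illustration, not a production design: the class name, similarity threshold, and feature-vector templates are all hypothetical, and a real deployment would never hold raw templates in one place (that is what the privacy-enhancing technologies discussed below are for).

```python
import hashlib
import secrets

MATCH_THRESHOLD = 0.95  # hypothetical similarity cutoff

def similarity(a, b):
    """Toy cosine similarity between two biometric feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

class IdentityCircle:
    """Toy enroll -> bind -> authenticate flow (all names hypothetical)."""

    def __init__(self):
        self.enrolled = {}  # user_id -> (template, salt)
        self.bindings = {}  # asset_id -> (user_id, binding digest)

    def enroll(self, user_id, template):
        """Register a biometric template at account origination."""
        self.enrolled[user_id] = (template, secrets.token_bytes(16))

    def bind_asset(self, user_id, asset_id):
        """Bind a credential, device, or token to the enrolled identity."""
        template, salt = self.enrolled[user_id]
        digest = hashlib.sha256(
            salt + repr(template).encode() + asset_id.encode()).hexdigest()
        self.bindings[asset_id] = (user_id, digest)

    def authenticate(self, user_id, probe, asset_id):
        """The same biometric must match at the time of any action."""
        template, salt = self.enrolled[user_id]
        if similarity(template, probe) < MATCH_THRESHOLD:
            return False  # fresh biometric does not match enrollment
        owner, digest = self.bindings[asset_id]
        expected = hashlib.sha256(
            salt + repr(template).encode() + asset_id.encode()).hexdigest()
        return owner == user_id and digest == expected  # asset still bound
```

The key property is that the asset binding is derived from the enrolled biometric, so authentication checks both "is this the same person?" and "is this asset still tied to that person?" in one step.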
For agentic AI, this means the trusted individual can generate a biometric signature that is linked to the agent to authorize its activity. To be clear, this does not mean the biometric itself is continuously exposed or stored in a vulnerable way. Privacy-enhancing technologies (PETs), such as multi-party computation (MPC) and zero-knowledge proofs (ZKPs), eliminate the need to store biometrics in one place and prevent them from being tampered with or stolen. The same PET techniques can be used to issue and manage tokens dynamically while keeping them tied to a persistent identity.
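To illustrate the distributed-storage idea behind MPC, here is a minimal XOR secret-sharing sketch in Python: the template is split into shares held by different parties, any subset short of the full set reveals nothing about the original, and the template is only reconstructible when all parties cooperate. This is a deliberate simplification—real MPC protocols match biometrics across shares without ever reconstructing the template.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_shares(template: bytes, n: int = 3) -> list:
    """Split a biometric template into n XOR shares.

    Each of the first n-1 shares is pure randomness; the last share is
    the template XORed with all of them, so any n-1 shares together are
    still indistinguishable from random noise.
    """
    shares = [secrets.token_bytes(len(template)) for _ in range(n - 1)]
    last = template
    for s in shares:
        last = xor(last, s)
    shares.append(last)
    return shares

def recombine(shares: list) -> bytes:
    """XOR all shares back together to recover the original bytes."""
    out = shares[0]
    for s in shares[1:]:
        out = xor(out, s)
    return out
```

Because no individual share is meaningful, a breach of any single party's storage yields nothing usable—the property that removes the centralized "honeypot" risk discussed later in this post.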
This approach of biometric-bound agents ensures that:
- Every action taken by the AI is cryptographically linked to the original human approver through a dynamic, time-bound biometric signature.
- Each transaction or action generates a unique token—meaning that even if an attacker intercepts one, they cannot reuse it for another transaction.
- The biometric remains consistent, but the identity token evolves, reducing the risk of theft, replay attacks, or impersonation.
This ensures that even if an AI agent is operating autonomously, its actions remain traceable and verifiable, and any compromise can be quickly mitigated.
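The three properties in the list above can be sketched as a single-use, time-bound authorization token. The sketch below is illustrative only: the session key standing in for a "biometric signature" is hypothetical (a real system would derive it through a live biometric match and the PETs described earlier), but it shows how a per-transaction nonce defeats replay while the underlying key stays stable.

```python
import hashlib
import hmac
import secrets
import time

class AgentAuthorizer:
    """Toy sketch: each AI-agent action gets a unique, time-bound token,
    so an intercepted token cannot be reused for another transaction."""

    def __init__(self, session_key: bytes, ttl: int = 60):
        # Hypothetical key released after a live biometric match;
        # it persists while the per-action tokens evolve.
        self.session_key = session_key
        self.ttl = ttl          # seconds a token stays valid
        self.used = set()       # nonces already spent (replay defense)

    def authorize(self, action: str):
        """Mint a one-time token cryptographically bound to this action."""
        nonce = secrets.token_hex(8)
        issued = int(time.time())
        msg = f"{action}|{nonce}|{issued}".encode()
        tag = hmac.new(self.session_key, msg, hashlib.sha256).hexdigest()
        return action, nonce, issued, tag

    def verify(self, action, nonce, issued, tag) -> bool:
        """Accept a token at most once, unexpired, and unmodified."""
        if nonce in self.used:
            return False  # replay: this token was already spent
        if time.time() - issued > self.ttl:
            return False  # expired
        msg = f"{action}|{nonce}|{issued}".encode()
        expected = hmac.new(self.session_key, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return False  # action details were tampered with
        self.used.add(nonce)
        return True
```

A verifier holding only the token learns nothing about the key, yet can confirm the action, detect any alteration, and reject both replays and stale authorizations.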
How Anonybit Plays Into This
Anonybit provides a privacy-first, decentralized identity infrastructure that is uniquely suited to securing agentic AI interactions. Unlike traditional biometric systems, which rely on centralized storage and create honeypots for hackers, Anonybit's patented system fragments and distributes biometric data across a multi-party cloud environment. This ensures that identities—whether human or agent—remain secure, verifiable, and unspoofable, and resistant to insider threats and quantum computing attacks.
Anchoring the Circle of Identity, Anonybit serves as a persistent, privacy-preserving service layer that enterprises can invoke across the user lifecycle—whether authenticating a customer in a call center, verifying an identity online, or securing access at a physical location. This capability extends beyond AI agents, ensuring seamless, trusted authentication at every touchpoint.
Anonybit also supports all biometric modalities—including face, voice, fingerprint, iris, and palm—allowing enterprises to choose the most suitable authentication method. This flexibility enables both consumer and workforce applications, ensuring security for AI-driven financial transactions, workforce automation, and customer engagement. As agentic AI proliferates across industries, from automated financial services to autonomous enterprise workflows, having a privacy-first, biometric-bound identity infrastructure regardless of modality becomes indispensable.
Conclusion: The Architecture of Trust in an AI-Driven World
Agentic AI represents an extraordinary leap forward, reshaping industries from finance to healthcare to cybersecurity. But as machines begin to make decisions that impact real lives, the need for identity assurance, fraud prevention, and privacy protection grows exponentially.
We can see it already: the future of AI is one of autonomy—but autonomy without trust is a recipe for exploitation. Trust will not be a byproduct of AI advancement; on the contrary, trust is the foundation upon which the future of AI must be built.
Those building agentic AI solutions, capabilities, and the systems that rely on them must recognize the implications of what is being unleashed. By embedding decentralized biometrics, cryptographic identity binding, and continuous authentication into the AI ecosystem, we can ensure that agentic AI operates not just with intelligence, but with integrity.
At Anonybit, we are doing our part to enable industry and stakeholders, providing the foundational infrastructure that ensures AI-driven interactions remain verifiable, privacy-preserving, and resilient against emerging threats. As agentic AI rapidly takes shape, we are committed to building the guardrails that will define a secure and trustworthy future.
To learn more, contact us.