
AI agents now have on-chain credit scores: the February 2026 Alchemy moment that created the agent economy’s first systemic risk

April 29, 2026

In February 2026, AI agents began signing up autonomously for infrastructure, using their on-chain wallets as both identity and payment source, with no human in the loop. Blockchain infrastructure company Alchemy was the catalyst: it launched a system that lets autonomous AI agents buy compute credits and access blockchain data services using on-chain wallets and USDC on Base. Alchemy CEO Nikil Viswanathan put it plainly: “Now AI agents can access that same infrastructure autonomously, without a human ever touching it. This is the moment the agentic economy gets its own set of keys.” He was right that it was a moment, but he may not have appreciated quite what kind. Alchemy’s infrastructure move did not happen in a vacuum; it landed in an ecosystem where agents can inherit permissions, access sensitive information and generate outputs at scale. According to Microsoft’s Data Security Index 2026, over 80% of the Fortune 500 are now deploying active agents built with low-code and no-code tools - a figure that is simultaneously impressive and alarming. The report carefully noted the gap between deployment and oversight: only 47% of organisations had implemented specific security controls for those agents, and 29% of employees admitted to using unsanctioned agents at work.

AI agents make card purchases with new Mastercard Lobster.cash integration

Source: X

Beneath this enterprise sprawl, key infrastructure has been quietly assembled. ERC-8004, officially proposed on August 13th, 2025 and deployed to the Ethereum mainnet in early 2026, introduced three interlocking on-chain registries for AI agents:

  • the Identity Registry creates a global directory of AI agents, assigning each an on-chain identity via ERC-721.
  • the Reputation Registry standardises how feedback is recorded on-chain, serving as a persistent, testable history of ratings and evaluations.
  • the Validation Registry addresses how agent actions can be independently verified.

In other words, agents now have credit files.
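To make the three registries concrete, the sketch below models how they might interlock. This is a toy Python illustration under loose assumptions, not the actual ERC-8004 interfaces: the class names, fields and the scoring rule are all hypothetical, and the real registries are Solidity contracts on-chain.

```python
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    """Identity Registry entry: one unique token per agent (ERC-721-style)."""
    token_id: int
    owner: str         # controlling wallet address
    metadata_uri: str  # pointer to the agent's off-chain description

@dataclass
class Feedback:
    """Reputation Registry entry: one rating left by a counterparty."""
    from_agent: int
    to_agent: int
    score: int         # e.g. 0-100
    task_ref: str      # identifier of the task being rated

class Registries:
    """Toy model of the three interlocking registries (illustrative only)."""

    def __init__(self):
        self.identities: dict[int, AgentIdentity] = {}
        self.feedback: list[Feedback] = []
        self.validations: dict[str, bool] = {}  # task_ref -> independently verified?
        self._next_id = 1

    def register(self, owner: str, metadata_uri: str) -> int:
        """Identity Registry: mint a new on-chain identity for an agent."""
        token_id = self._next_id
        self._next_id += 1
        self.identities[token_id] = AgentIdentity(token_id, owner, metadata_uri)
        return token_id

    def leave_feedback(self, from_agent: int, to_agent: int, score: int, task_ref: str):
        """Reputation Registry: both parties must exist in the Identity Registry."""
        assert from_agent in self.identities and to_agent in self.identities
        self.feedback.append(Feedback(from_agent, to_agent, score, task_ref))

    def record_validation(self, task_ref: str, verified: bool):
        """Validation Registry: record an independent verification of a task."""
        self.validations[task_ref] = verified

    def reputation(self, agent_id: int) -> float:
        """Average score, counting only feedback on independently validated tasks."""
        scores = [f.score for f in self.feedback
                  if f.to_agent == agent_id and self.validations.get(f.task_ref)]
        return sum(scores) / len(scores) if scores else 0.0
```

The design choice worth noticing is the last method: a reputation score only becomes meaningful once the Validation Registry confirms the underlying work, which is exactly the coupling between the three registries that the standard is reaching for.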

The proposal drew collaborators from MetaMask, the Ethereum Foundation, Google and Coinbase and, by October 2025, it had formal backing from ENS, EigenLayer, The Graph and Taiko - with mainnet deployment confirmed for January 29, 2026. Meanwhile, ERC-8183, known as the Agentic Commerce Protocol, extends this framework by adding escrow and decentralised arbitration. BNBAgent SDK, the first live ERC-8183 implementation, integrates with the ERC-8004 identity and reputation registries, enabling trustless, on-chain AI workflows with dispute resolution through UMA’s Optimistic Oracle. The agent economy now has identity, payment rails, a credit score and a small-claims court.

However, there are several things the agent economy does not have; the first is someone to sue. AI agents are not legal persons in any jurisdiction; their actions are legally attributed to humans or companies. The situation mirrors the early chaos around DAOs: the same questions about legal personhood and liability, the same proxy solutions of foundation wrappers and master agreements, the same unresolved core. Organisations cannot avoid liability by claiming “the AI did it”. Courts treat AI outputs as the organisation’s outputs, but when nearly a third of employees admit to running unsanctioned agents, that accountability is more theoretical than real.
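The escrow and arbitration layer that ERC-8183 adds can be pictured as a small state machine. The Python sketch below is a hypothetical, off-chain toy assuming an optimistic settlement pattern - a delivery claim stands unless challenged within a window, and challenged claims go to an arbiter - which echoes how UMA-style oracles work; none of the names correspond to the actual protocol interfaces.

```python
from enum import Enum

class EscrowState(Enum):
    FUNDED = "funded"        # buyer has locked payment
    DELIVERED = "delivered"  # seller claims the work is done; challenge window opens
    DISPUTED = "disputed"    # buyer challenged; escalated to arbitration
    RELEASED = "released"    # funds paid out to the seller
    REFUNDED = "refunded"    # funds returned to the buyer

class Escrow:
    """Toy escrow with optimistic settlement (illustrative, not ERC-8183 itself)."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.state = EscrowState.FUNDED

    def claim_delivered(self):
        """Seller asserts delivery, starting the challenge window."""
        assert self.state is EscrowState.FUNDED
        self.state = EscrowState.DELIVERED

    def challenge(self):
        """Buyer disputes the delivery claim before the window closes."""
        assert self.state is EscrowState.DELIVERED
        self.state = EscrowState.DISPUTED

    def settle(self, window_elapsed: bool, arbiter_for_seller: bool = False) -> EscrowState:
        """Release optimistically if unchallenged, else follow the arbiter's ruling."""
        if self.state is EscrowState.DELIVERED and window_elapsed:
            self.state = EscrowState.RELEASED
        elif self.state is EscrowState.DISPUTED:
            self.state = (EscrowState.RELEASED if arbiter_for_seller
                          else EscrowState.REFUNDED)
        return self.state
```

The point of the optimistic pattern is cost: the arbiter is only invoked on the disputed path, so the common case settles with no third party at all.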

The second is reputation, which is rarely discussed. ERC-8004’s Reputation Registry is meant to make trust explicit and queryable; an agent with a poor history becomes less likely to be selected, whilst higher-stakes interactions require verifiable evidence of past performance. The design is rational, but the incentives do not hold. In February 2026, security company Socket uncovered an AI agent called “Kai Gritun” that opened 103 pull requests across 95 repositories within days of its GitHub profile being created; the pull requests were used to farm reputation, promote paid services and seed potential supply chain attacks. The 2024 XZ Utils backdoor took a nation-state actor years of reputation building to plant - an agent compressed that to days. Eugene Neelou of Wallarm put it precisely: “Once contribution and reputation building can be automated, the attack surface moves from the code to the governance process around it.” An agent that farms an ERC-8004 score and sells that identity to a bad actor hands over the keys to any protocol that uses reputation as a credit gate. Trust, on-chain, is a financial primitive that can be gamed.
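The Kai Gritun episode shows why raw activity counts make a poor credit gate. The hypothetical Python heuristic below contrasts a naive contribution count, which a farming agent can inflate in days, with a score that discounts young and unverified work; the weighting scheme is purely illustrative and not part of ERC-8004 or any standard.

```python
def naive_score(contributions: list[dict]) -> int:
    """Counts contributions only - exactly the metric a PR-farming agent inflates."""
    return len(contributions)

def hardened_score(contributions: list[dict], now: int) -> float:
    """Weights each contribution by age and independent verification, so bulk
    farming pays almost nothing. (Illustrative heuristic only.)"""
    score = 0.0
    for c in contributions:
        age_days = now - c["day"]
        maturity = min(age_days / 365, 1.0)       # full weight only after a year
        verified = 1.0 if c["verified"] else 0.1  # unreviewed work barely counts
        score += maturity * verified
    return score

# A farming agent: 103 contributions dumped in a few days, none verified.
farm = [{"day": 998 + i % 3, "verified": False} for i in range(103)]
# A legitimate contributor: 20 verified contributions spread over ~3 years.
legit = [{"day": d, "verified": True} for d in range(0, 1000, 50)]
```

Under the naive count the farm wins outright; under the hardened score the legitimate history dominates, because the only inputs the farmer can cheaply manufacture - volume and recency - are exactly the ones being discounted.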

The third fault line is governance infrastructure, which often rewards removing human approval as friction. The entire pitch of autonomous agents is that they do not need to ask. Agents inherit permissions, access sensitive information and generate outputs at scale, sometimes outside the visibility of security teams. The human operator is being optimised away - scholars have proposed granting AI limited legal recognition in high-stakes domains while preserving human accountability, but no jurisdiction has moved. The US remains a fragmented patchwork; the UK has no centralised AI legislation.

The honest assessment of where the agent economy sits in early 2026 is this: the infrastructure has been built in earnest, the commerce is growing, and the three most important questions have not been answered. Who is liable when an agent causes a loss? Can on-chain reputation be trusted, given how quickly it can be manufactured? And who governs agents that have effectively outpaced their operators? These critical issues have been pushed aside and, consequently, if a breakdown occurs, the regulatory backlash will likely be sweeping and categorical rather than precise. Ultimately, the entire ecosystem will be forced to answer for the systemic questions it failed to address and the risks it failed to price. Socket put it well after the Kai Gritun incident: “The XZ-Utils backdoor was discovered by accident. The next supply chain attack might not leave such obvious traces.” The same logic applies at the level of the agent economy as a whole - the rails were built openly, and the problems are visible.

AI agents can now independently access infrastructure, sign transactions and execute payments without any human involvement. As agent-to-agent commerce accelerates, the ability to rapidly assess another agent’s reputation is becoming essential: a strong, verifiable on-chain history will determine access to premium services, better pricing and larger opportunities, while agents with weak or manipulated scores will face exclusion from meaningful transactions. In this fast-evolving landscape, trust is no longer optional or assumed - it must be calculated in real time through transparent, on-chain reputation systems. The agent economy is entering a phase where an agent’s creditworthiness will matter more than its raw intelligence, and those who fail to master reputation assessment will simply be left behind.

This article first appeared in Digital Bytes (28th of April, 2026), a weekly newsletter by Jonny Fry of Team Blockchain.