Payment Infrastructure Has Always Assumed a Human. That Assumption Is Breaking
Search through any PayPal forum or subreddit and you'll find the same story told a thousand ways: an account flagged for fraud, funds frozen, business interrupted. More often than not these are false positives — a legitimate transaction that tripped an algorithm.
The first instinct is to treat this as a bug, an example of automated systems breaking down when they meet real humans. That analysis isn't wrong. But it's also a preview of the future, and things are only going to get worse, though not for the reasons you might think.
Every payment system built in the last 50 years was built for humans. Why wouldn't it be? Humans were the ones buying things. No bot had the sophistication, or the trust, to make a purchase worth designing around. So human interaction got baked into everything: fraud detection, know-your-customer (KYC) checks, dispute resolution, checkout flows.
But as we enter the era of AI agents, that is changing. AI will soon need to interact with these systems, navigate them without breaking, and complete purchases on behalf of the people who deploy them. This is a problem. Our infrastructure was never designed to accommodate that.
We can already see it breaking down — and agents haven't even arrived yet. Take this account of a tech-savvy professional who lost his PayPal account of twenty years: https://www.linkedin.com/pulse/twenty-years-one-invoice-im-out-how-paypals-killed-my-michael-mcnally-wb7tc/
The system wasn't trying to stop him. He simply wound up in the wrong menus, and that was enough to trigger a permanent deletion — reversed only after his story gained enough public attention to force a human review. A competent person, doing normal things, undone by an interface that had no tolerance for deviation.
And it's not just fraud detection. KYC assumes a human is present to verify everything. An AI agent can't hold a passport up to its face and take a picture. Checkout flows time out when no one is actively clicking through them. Two-factor authentication expects someone to be watching a phone.
The entire architecture of internet commerce was built around keeping bots out. We invented CAPTCHAs for that exact reason. Now we have to reckon with the fact that some bots are supposed to be let in.
AI usage is growing, and agents will increasingly need to access ecommerce systems to do their jobs. We already see those systems struggling with ordinary human behavior — false positives, unnecessary friction, unexplained account closures. But these aren't bugs to be patched. They are structural decisions made years ago, now hardened into infrastructure, with consequences that will compound as agent-driven transactions begin to scale. AI behavior will look nothing like human behavior. And humans are already getting stuck.
As agents enter our commercial ecosystems in larger numbers, the question becomes unavoidable: how will these systems deal with them?
The answer requires more than a software update. It requires rethinking the assumptions underneath. Can fraud detection be built around accountability rather than identity? Can KYC verify an entity and its delegated permissions rather than a face? Can checkout flows be designed for both human and non-human actors without sacrificing security or usability?
I don't have complete answers to those questions. What I do know is that these systems need to be rebuilt intentionally — with the knowledge that autonomous agents will interact with them. That means finding ways to establish accountability while separating it from individual identity. It means designing for actors that don't get tired, don't browse before they buy, and don't respond to a verification text.
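To make one of those ideas concrete, here is a rough sketch of what "accountability separated from individual identity" might look like: a verified business signs a narrowly scoped mandate (spending cap, merchant allowlist, expiry) that its agent presents at checkout, so the merchant can verify the delegation without ever verifying a face. Everything here is hypothetical, not an existing standard; the field names, the HMAC scheme, and the `issue_mandate`/`check_purchase` helpers are illustrative only.

```python
import hashlib
import hmac
import json
import time

# Hypothetical: the signing key belongs to the verified business (the
# accountable party), not to the agent it deploys.
SECRET = b"issuer-signing-key"

def issue_mandate(agent_id, max_amount_cents, allowed_merchants, ttl_seconds):
    """Issue a signed, narrowly scoped delegation to an agent."""
    claims = {
        "agent": agent_id,
        "max_amount_cents": max_amount_cents,
        "merchants": sorted(allowed_merchants),
        "expires": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def check_purchase(mandate, merchant, amount_cents, now=None):
    """Accept a purchase only if the mandate is authentic and in scope."""
    payload = json.dumps(mandate["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mandate["sig"]):
        return False  # forged or tampered mandate
    c = mandate["claims"]
    if (now or time.time()) > c["expires"]:
        return False  # delegation has lapsed
    return merchant in c["merchants"] and amount_cents <= c["max_amount_cents"]
```

The point of the sketch is the shape, not the crypto: fraud review against a mandate like this asks "was this purchase within the issuer's delegation?" rather than "is a human present?", which is a question an automated system can actually answer about an automated actor.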
These are the problems that people across payments, security, and infrastructure are beginning to work through.
At the end of the day, the coming revolution is not really about AI. It is about infrastructure that was never as robust as we assumed.