Dan Muller is CEO of the Chicago-based payments fintech Aeropay.
There’s a lot of noise right now around AI and payments. Headlines about smarter fraud detection, checkout fading into the background, and anything “powered by AI” fill every feed.
Some of it’s true, but AI isn’t going to save us from fraud. It’s not going to stop a bad actor from pretending to be someone else, and it won’t protect your customers when deepfakes become convincing enough to pass a security check.
We’re already seeing glimpses of that future. Artificial identities. Cloned voices. These aren’t hypothetical scenarios anymore. They’re signs of how quickly AI is being used to outsmart the very systems meant to keep us safe.

Can a deepfake AI pass Know Your Customer (KYC) verification? What happens when a model can mimic a face, blink on camera, and read a driver’s license number out loud? What if AI tools can generate synthetic merchants with legitimate-looking websites and transaction histories?
And then there’s the human element. Picture a cloned voice calling a customer support line. It sounds like the account holder, right down to the background noise and tone. How confident are we that a rep would catch it before processing the request?
But the narrative keeps pointing to ever-smarter models as the solution, rather than addressing the obvious.
AI doesn’t pick your partners. It doesn’t build your fraud controls. It doesn’t care if you end up trusting the wrong system. That’s the part people aren’t paying enough attention to.
Deepfake-level imitation, and agentic AI along with it, isn’t a niche problem anymore. It’s coming for every system we rely on that’s built on trust.
Think about a fake merchant onboarding before a human even reviews it, or an AI model trained on customer service interactions that learns to steer a conversation toward approval.
The same innovation that enables more efficient payments also allows for more sophisticated fraud. So while everyone’s talking about how AI will make transactions faster, safer, or more personalized, the question should really be: Who’s making sure those claims hold up when the fraud gets smarter too?
The next phase of innovation is going to be about discernment, not necessarily having the most advanced model: knowing whom to trust, and which partners are equipped to defend against the risks that come with new technology.
AI will absolutely continue to shape payments. But the companies that succeed will be the ones addressing the harder questions and implications AI raises, rather than just touting it.
Who built the systems you rely on? How do they respond when something fails? Do they see fraud as a PR problem or a product problem? Those answers matter more than another round of “AI-powered” headlines.
From my (human) point of view, AI can make payments smarter, but it’s still partnerships and the right infrastructure that make them work.