While agentic AI stands to usher in a new online shopping era, payments companies fret that it could turbocharge payment fraud.
Payments professionals attending the Money 20/20 conference in Las Vegas last week foresee bad actors using AI to perpetrate payments scams on a larger scale.
Executives cited programs available on the internet that help aspiring criminals do everything from writing malicious code to creating bots that run scams for them as an example of what could happen once agentic programs become widespread.
Payments companies, e-commerce merchants and tech companies are all seeking to develop artificial intelligence agents authorized to shop on behalf of consumers. That age is coming soon, Worldpay Chief Product Officer Cindy Turner said during a panel discussion at the conference on Oct. 26.
If an AI agent makes a purchase, “how do I know a fraudster hasn’t gotten hold of that agent and is trying to stuff it with bad credentials?” she said.
That was a frequent topic of conversation at the annual fintech gathering.
AI programs specifically intended for fraud are available for a small subscription fee, said Nash Ali, head of operational strategy for the credit bureau Experian, during a Money 20/20 panel.
“We're sitting on the precipice now of another explosion in fraud with agentic AI coming our way,” he said.
Ali singled out a program that already exists, called FraudGPT, which he said could increase incidents of employment and social engineering scams that bedevil consumers and the payments industry.
“That’s out there for a $1,400 [annual] subscription,” he said. The program offers “amazing tools that can do terrible things to people and enterprises that are not ready.”
Now those tools could leverage agentic AI to make fraudulent purchases, Ali said.
Synthetic agents powered by generative artificial intelligence can search the web for specific products that meet certain specifications. Consumers already use programs such as ChatGPT and Perplexity to find recommendations, but AI agents empowered to make purchases on a consumer’s behalf will someday be widespread, according to payments executives. The work to create AI-driven commerce poses logistical, liability and safety concerns even before considering the prospect of fraud.
“How do you know that the agent did what you were expecting it to do?” Hilary Packer, head of enterprise data and artificial intelligence for American Express, said in a video interview at Money 20/20. “The example I like to use is, I'm going to empower my agent on my behalf to shop for a red dress when the price drops below $100 and then put it in my cart. What happens if what shows up at my house is a blue dress?”
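To make Packer’s example concrete, the snippet below is a minimal, hypothetical sketch in Python of the kind of check her question implies: verifying an agent’s proposed purchase against the instructions the consumer actually gave before the order goes through. Everything in it, including the Mandate and ProposedOrder types and the approve_order function, is invented for illustration and does not represent American Express’s systems.

```python
# Hypothetical sketch: check an agent's proposed order against the
# consumer's original instructions (the "mandate") before checkout.
# All names here are illustrative, not any company's real API.
from dataclasses import dataclass

@dataclass
class Mandate:
    """What the consumer authorized the agent to do."""
    item: str          # e.g. "dress"
    color: str         # e.g. "red"
    max_price: float   # e.g. 100.00

@dataclass
class ProposedOrder:
    """What the agent is about to buy."""
    item: str
    color: str
    price: float

def approve_order(mandate: Mandate, order: ProposedOrder) -> bool:
    """Reject any order that drifts from the consumer's instructions."""
    return (
        order.item == mandate.item
        and order.color == mandate.color
        and order.price < mandate.max_price  # "when the price drops below $100"
    )

# The "blue dress" failure from Packer's example is caught before checkout:
mandate = Mandate(item="dress", color="red", max_price=100.00)
print(approve_order(mandate, ProposedOrder("dress", "blue", 89.99)))  # False
print(approve_order(mandate, ProposedOrder("dress", "red", 89.99)))   # True
```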
It’s a problem that payments executives who attended Money 20/20 said must be addressed as the use of agentic AI spreads.
It’s not clear how long FraudGPT has existed, but media outlets first wrote about the program two years ago. Its name is a play on ChatGPT, the artificial intelligence chatbot that brought generative AI into the mainstream.
“We're seeing [generative] AI being used to commit fraud at a pace that’s unprecedented,” Ali said. “It’s no longer a human sitting and committing fraud on an individual basis.”
Artificial intelligence can create all manner of tools that bad actors can use to trick consumers and businesses into sending them money, Yuliya Kazakevich, global head of merchant risk for Block’s Cash App, said in a separate Money 20/20 panel.
“We're all seeing deep fakes and synthetic identities,” she said.
One of the solutions, Kazakevich added, is using the very technology that fraudsters are employing. “We use AI to detect AI,” she said. “We use AI to detect the content that’s created by the machines, and then we’re using that to identify behavioral patterns that can help us stop that from happening.”
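As an illustration of the two-stage approach Kazakevich describes, the hypothetical Python sketch below first takes a score for whether content looks machine-generated, then combines it with behavioral signals to decide whether to flag a session. The signal names, weights and threshold are all invented for the example and do not reflect Cash App’s actual models.

```python
# Hypothetical sketch of "AI to detect AI": a machine-generated-content
# score is folded into behavioral signals to flag a risky session.
# The scoring inputs and weights are stand-ins, not a real fraud model.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    synthetic_content_score: float  # 0-1, from a "detect the machine" model
    requests_per_minute: float      # behavioral: bots move faster than people
    new_payees_today: int           # behavioral: fraud rings fan out quickly

def flag_for_review(s: SessionSignals) -> bool:
    """Simple weighted rule; a production system would use a trained model."""
    risk = (
        0.6 * s.synthetic_content_score
        + 0.3 * min(s.requests_per_minute / 60.0, 1.0)
        + 0.1 * min(s.new_payees_today / 10.0, 1.0)
    )
    return risk > 0.5

print(flag_for_review(SessionSignals(0.9, 120.0, 8)))  # True: likely bot
print(flag_for_review(SessionSignals(0.1, 2.0, 1)))    # False: looks human
```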
Merchants and payment companies already have ways to authenticate users and should still lean on those processes, Turner stressed in the Sunday panel.
“As much as possible, I always want to use the existing protocols that are out in the market,” she said.