
Breaking the Code: How Criminals Use AI to Simplify Fraud and How to Fight It

March 12, 2026 | Brady Harrison
Reading Time: 4 minutes

Highlights: 

  • Criminals are using AI and generative AI to automate, simplify, and scale complex fraud, making attacks like promo abuse, card testing, and synthetic identity creation highly sophisticated and difficult for traditional systems to detect.

  • Countering this advanced threat requires businesses to deploy modern, AI-backed fraud detection systems alongside best practice safeguards like strong security checks, limits on guest checkouts, and team education.

Fraudsters have discovered the power of AI when it comes to bypassing initial checks. They now use advanced technology to simplify much of the manual work that went into creating and executing scams to steal from your business and your customers. AI empowers criminals to better hide their tracks and thwart basic detection systems.

In part one of this blog series, we looked at how sophisticated fraudsters are leveraging AI to execute more automated attacks. In this part, we’ll examine some examples of these high-level schemes and how you can fight them.

Criminals use artificial intelligence to simplify fraud

Using artificial intelligence, criminals obfuscate and rework information. Generative AI simplifies the process of masking identities to bypass initial fraud checks, even if those changes would seem obvious to experienced managers and fraud experts. Criminals find it easier than ever to appear as legitimate customers or attempt fraudulent transactions without revealing much of their operations.

Example: AI-influenced Promo Abuse Fraud

Fraudsters abuse artificial intelligence for promo abuse fraud. They seek to undermine your well-intentioned efforts to reward new customers and drive sales. When you set up a discount code for new users, or any limited-use coupon, you expect it to capture new sales and build loyalty, but these criminals see it as an opportunity to get unearned discounts and enrich themselves at your expense.

By masking emails, such as inserting dots or “+” tags into a single inbox address so it reads as many different addresses, fraudsters sign up for unlimited single-use promotions. Using AI, they rapidly generate hundreds or even thousands of email and mailing addresses in seconds. Without AI defenses, these can pass initial checks and evade blacklists.

Mail servers recognize all of these as the same address and deliver email to the same account. Yet, to many basic detection systems, each represents a unique identity and a new customer in the system. Such simple masking tools can be deceptively effective, especially when deployed at scale. When they work, suddenly your business is regularly giving away huge discounts or promotional items meant for only a single use.
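One defensive counterpart is to canonicalize addresses before counting “new” customers. The sketch below assumes Gmail-style semantics (dots in the local part are ignored; anything after “+” is a disposable tag); other providers differ, so treat it as an illustration rather than a complete solution:

```python
def canonicalize_email(address: str) -> str:
    """Collapse common masking variants of an email address.

    Assumes Gmail-style semantics: dots in the local part are ignored,
    and anything after '+' is a disposable tag. Real providers vary,
    so this is an illustrative sketch, not production logic.
    """
    local, _, domain = address.strip().lower().partition("@")
    local = local.split("+", 1)[0]          # drop "+promo1"-style tags
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")      # Gmail ignores dots in the local part
    return f"{local}@{domain}"

# All of these masked variants collapse to one underlying inbox:
variants = ["new.user@gmail.com", "newuser+deal1@gmail.com", "N.e.w.user+x@GMAIL.com"]
print({canonicalize_email(v) for v in variants})  # → {'newuser@gmail.com'}
```

Deduplicating promo signups on the canonical form, rather than the raw string, removes the cheapest masking trick from the fraudster’s toolkit.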

Methods of Stopping This Abuse

Such a series of attempts may seem obvious to you as an experienced business owner or manager. Few have the time to monitor each transaction for simple tricks, however. Each fraudulent attempt takes a fraction of a second, while manually correcting the problem takes far longer. Here are a few things you can do to help thwart promo abuse before deploying AI:

  • Educate and train your team: It’s imperative to identify and shut down this type of fraud before it hits your bottom line. Teach your team how to recognize and report promo abuse.

  • Set clear terms and conditions for promotions: Add provisions to limit discounts and rewards to a single person or physical address. Include a note that masking violates these terms.

  • Go beyond basic email or physical address checks: Turn to modern tech that verifies device IDs, geolocation, and other factors to help filter illegitimate attempts.
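The third safeguard can be sketched as a single-use check keyed on several signals at once, so a “new” email from an already-seen device still gets caught. The signal names below (email, device_id) are assumptions; substitute whatever identifiers your checkout actually collects:

```python
from collections import defaultdict

class PromoGuard:
    """Illustrative single-use promo check keyed on multiple signals.

    Any one matching signal (email, device ID, address, etc.) blocks a
    repeat redemption. A sketch only: real systems add expiry, scoring,
    and manual-review queues.
    """
    def __init__(self, max_uses_per_signal: int = 1):
        self.max_uses = max_uses_per_signal
        self.seen = defaultdict(int)   # (promo, signal name, value) -> count

    def allow(self, promo_code: str, signals: dict) -> bool:
        keys = [(promo_code, name, value) for name, value in signals.items() if value]
        if any(self.seen[k] >= self.max_uses for k in keys):
            return False               # some identifier already redeemed this promo
        for k in keys:
            self.seen[k] += 1
        return True

guard = PromoGuard()
first = guard.allow("WELCOME10", {"email": "newuser@gmail.com", "device_id": "dev-42"})
# A fresh email from the same device is still blocked:
second = guard.allow("WELCOME10", {"email": "other@example.com", "device_id": "dev-42"})
print(first, second)  # → True False
```

The design choice here is deliberate: requiring every signal to be novel is far harder to game than checking the email address alone.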

These important safeguards help protect your revenue during any type of promotional campaign. You rely on your experience to inform your detection systems and your team, even if you can’t personally evaluate every transaction. Now, AI can deliver a similar element of “experience” in a timely and effective manner.

Fraudsters Deploy AI to Automate Large-scale Attacks

Criminals looking for outsized gains turn to AI automation to operate at scale. AI tools for automation handle much of the tedious — and often risky — distribution aspects of fraud and scams. These advanced bots do far more than their predecessors. The most dangerous also employ machine learning to refine their techniques.

Example: AI Automated Card Testing Fraud

AI automation is highly prevalent in card testing fraud. Fraudsters obtain lists of card numbers and attached identities, then begin reviewing them for validity. AI automation simplifies this process by making hundreds or even thousands of small charges near-simultaneously against publicly accessible businesses like yours.

If the card testing works, the AI compiles a list of valid payment options the fraudster then uses for criminal purposes. Even if the testing alerts a cardholder, your company is likely to pay processing and chargeback fees for any resulting dispute. Your business may well lose the respect of that customer. 

Successful AI card testing fraud results in:

  • Loss of revenue: Even small transactions can add up alongside relevant fees.

  • Weaknesses in security: Machine learning helps attackers discover clear vulnerabilities in your site.

  • Loss of time: Dealing with small transactions that turn out to be fraud can be very labor intensive.

  • Potential high-risk classification: Banks and card brands may increase fees, decline future transactions, or require chargeback monitoring programs.

  • Possible outsized losses: Now that the fraudster understands the vulnerability, they may go for higher-ticket items and recognize your business as a source of easily resold goods.

Preventing Card Testing Fraud

The worst part is that many of these attacks are addressable through modern fraud solutions, which provide a bevy of options to protect your business. Malicious AI may spoof browser validation or hide behind proxy and VPN connections, and modern detection systems can recognize both. The more security you have in place, the more difficult it becomes for criminals, so:

  • Deploy relevant security checks: Require CVV codes and Address Verification Service checks.

  • Add a Captcha and botnet prevention: These tools can identify and stop many common bots.

  • Limit guest checkout and checkout attempts: Gather more data and thwart brute-force attacks.

These may seem like simple fixes, and they should be part of any card testing fraud solution. However, fraud prevention AI truly shines when it comes to fending off these types of large-scale attacks, applying a machine-learned form of “intuition” at a speed no manual review can match.

Artificial Intelligence Facilitates New Types of Fraudulent Activity 

The rapid and widespread adoption of generative AI puts new tools in the hands of fraudsters. Criminals have developed new and innovative ways to use this technology, transforming the work of fraud analysts and putting additional pressure on legitimate businesses like yours. We’re in a new future where artificial intelligence allows criminals to:

  • Fake voices to deliver believable telephone calls and texts using bots

  • Generate realistic images of fake receipts or shipping confirmations

  • Create videos to support phishing scams and social engineering

  • Use unsecured chatbots to gather account information

Each of these represents a real threat to security, which demands comprehensive protection. New types of fraud continue to emerge as AI becomes more commonplace. The goal of these criminals remains the same — to steal revenue and goods from your business or customers — but we now face constantly evolving threats that grow with each iteration.

Example: Synthetic Identities

Criminals don’t just attempt to mask who they are or impersonate real customers; fraudsters use artificial intelligence to create entirely new individuals. These synthetic identities draw from data on real people, much of which may be publicly available through social media and other online activity.

Fraudsters use real photos of people to generate highly realistic fake portraits. Addresses and phone numbers all check out to real residential locations. These synthetic identities could even have their own social media accounts and friend or follower networks. Each level of detail added to a synthetic person becomes a new challenge for business owners and fraud solutions providers working to keep criminals at bay.

While fake accounts are nothing new, the level of sophistication now reaches previously unseen levels. And the potential for loss is staggering. An analysis by Equifax of the customer profiles of one credit issuer revealed more than 62,000 accounts tied to synthetic identities, driving over $8 million in losses each year. Prevention tools that don’t include AI or machine learning struggle to deal with this type of situation.


Want to stop fraudsters in their tracks? Discover data-driven identity and fraud solutions today.

Brady Harrison

Head of Strategy & Execution, Identity & Fraud Services

Brady Harrison is the Head of Strategy & Execution at Equifax Identity & Fraud Services where he leads data-driven initiatives to combat fraud and optimize customer results. He leverages deep expertise in financial technology, fraud detection, and data visualization. Brady focuses on the strategic view of the business [...]