That attractive, charming person on a video-chat dating website and that attentive, sympathetic caller on a dating phone line are likely scammers employing the latest artificial intelligence features to separate lonely romantics from their money, cybersecurity experts say.
Hyper-realistic artificial pictures, voice clones and deepfake video calls are driving a surge in online romance scams, experts say.
Officials at LexisNexis Risk Solutions’ Government Group, which works with public agencies to combat digital frauds, estimate that Americans lost more than $3 billion to romance scams last year, up from $1.2 billion in 2024 and $250 million in 2023.
“AI has taken the traditional dating scam and put it on steroids,” said Haywood Talcove, CEO of LexisNexis Risk Solutions Government. “Criminals can now generate realistic photos, polished profiles and convincing conversations at scale.”
Traditionally, online “catfishing” schemes used photos stolen from other people’s social media channels to create fake dating profiles. Scammers wrote messages and made phone calls from other countries, requesting money they claimed was necessary to meet their victims in person.
Eagle-eyed daters typically caught these fraudsters through Google Images searches, by insisting on video chats the scammers refused to join, and by noticing the scammers' incongruous foreign accents.
Dan Ye, a Maryland-based AI expert who lectures at Johns Hopkins University, said live face-swapping and other technologies have eliminated these traditional signs of offshore fakery.
“AI profiles now speak with perfect grammar, slang, and cultural nuances,” Mr. Ye said in an email. “Reverse image searches are increasingly useless because the photos are AI-generated originals, not stolen from Instagram.”
According to Trustpilot user reviews, up to 80% of the profiles on major dating websites such as Hinge and Bumble are now fake or AI-generated. That’s up from 10% to 15% of total profiles before the AI boom in 2023.
Experts say the number of fake profiles and the financial losses from AI catfishing are likely even higher, because most victims feel either too embarrassed or too uncertain to report them.
“It’s difficult to pinpoint exact numbers of AI dating profiles because some may be only partially AI-generated, and it’s hard to tell the difference at times because the technology has gotten so good so quickly,” said Amber Brooks, Florida-based editor of DatingAdvice.com.
She noted that nearly half of the youngest singles now use AI to spell-check their profiles, airbrush their photos and "talk themselves up."
Paying the price
The computer security company McAfee reported last month that losses in online romance cons ranged from $500 for the youngest adults to $5,000 or more for middle-aged victims.
Norton, a prominent digital security and antivirus company, reported that nearly half of online daters admitted this year to being targeted by a dating scam. Of them, 74% fell victim to one.
Norton reported blocking more than 17 million dating scam attacks during the last three months of 2025, an increase of 19% from the year before. The company also noted that only 46% of people correctly identified AI-generated photos in a test.
Conscious of these increasing challenges, some prominent dating websites have bolstered their vetting processes in recent years. For example, Tinder recently mandated facial verification, and Match.com now requires a phone number and email for confirmation to weed out bots and duplicate profiles.
Dave Auman, the chief technology and safety officer at MyTruDate, a Florida-based dating platform that requires a government-issued ID for biometric screening and a background check, said that such measures still fail to catch many AI-generated scams.
For example, he noted that dedicated fraudsters can evade verification processes and use ChatGPT to craft text messages that fool 70% of people into thinking they’re real.
“Scammers have moved away from stereotypical too-good-to-be-true personas and now favor ‘strategic imperfection’ to enhance credibility,” Mr. Auman said. “AI bots can manage dozens of simultaneous conversations 24/7 without fatigue.”
Surviving fraud
Most people who lose money to online romance scams never get a refund.
Cybersecurity experts say the best way to avoid AI fraud is to return to old-school dating practices: watching for natural behavior and meeting in person before sharing personal information.
“Insist on a spontaneous video call,” said Amber Lee, a veteran matchmaker who helps working professionals meet people. “Scammers often avoid unscripted interaction.”
She said it can also help to check a match's "online footprint" on LinkedIn, social media and professional websites to see whether the person exists outside the dating profile.
Key red flags of AI scams include online daters who resist requests to meet in person, respond to messages 24/7, share no candid or casual photographs, and offer empathetic words while avoiding specific questions about themselves.
Other warning signs include requests for money, cryptocurrency investments, financial information, and early attempts to move the conversation to an encrypted messaging platform such as WhatsApp.
Cybersafety experts urge people who think they’ve encountered an AI romance scam to cease communications, report it to the dating website, and file complaints with the FBI and the Federal Trade Commission.
“If you realize you’ve been communicating with a scammer, stop contact immediately and report it,” Ms. Lee said. “There’s no shame in it, and you want to be proactive in protecting other potential victims.”
• Sean Salai can be reached at ssalai@washingtontimes.com.