- The Washington Times - Thursday, December 11, 2025

Scammers are adept at using artificial intelligence to fleece people, from fake phone calls that mimic a grandparent’s voice to emails that replicate official requests from the Department of Veterans Affairs.

Sens. Josh Hawley, Missouri Republican, and Maggie Hassan, New Hampshire Democrat, want to know what the top AI firms plan to do about it.

The senators fired off a letter to seven companies — OpenAI, Anthropic, Meta, Google, Microsoft, xAI and Perplexity AI — on Thursday, demanding details of their anti-scam measures.



Lawmakers say AI-driven scams are fueling a surge in criminal proceeds. The FBI recorded $16.6 billion in reported losses from suspected scams and cybercrimes in 2024, a 370% increase from $3.5 billion in 2019.

“The federal government has a responsibility to protect the American people from scams, but this effort requires an all-hands-on-deck approach across multiple industries,” the senators wrote.

Their concern about scams reflects growing tension over the society-changing impact of AI, a sector attracting billions of dollars in investment and growing at breakneck speed.

While firms salivate over rapid productivity growth, lawmakers are worried about sophisticated schemes in which scammers dispatch convincing emails, texts, and phone calls at an “industrial scale.”

They pointed to a New York man who was sent to prison this year for a “grandparent scam” in which he stole around $20,000 from three New Hampshire families. The man used AI-generated voice clones to convince them that their loved ones were in trouble.


Lawmakers also pointed to email scams in which users cannot tell if a company or government email is genuine or fraudulent. For instance, the VA warned seniors not to fall for email scams that request personal or financial information.

The Social Security Administration Office of the Inspector General issued similar warnings.

“With advancements in AI, scams will continue to grow in sophistication, frequency, and impact. In the early phases of a scam, criminals can use generative AI to quickly identify and then collect details on their targets, enabling them to create tailor-made scams,” the senators wrote.

Lawmakers praised OpenAI, for example, for prohibiting the use of its programs to generate scams.

“AI companies, however, have reportedly faced challenges in preventing the misuse of their technology,” they said, adding that scammers use clever phrasing to trick AI programs into performing forbidden tasks.


The senators’ letter does not accuse any individual company of violating the law or of explicit wrongdoing. However, it demands answers, by Jan. 14, to 16 questions about each company’s anti-scam measures, how it keeps personally identifiable information out of its training data, and cases in which it has had to coordinate with law enforcement on fraud.

• Tom Howell Jr. can be reached at thowell@washingtontimes.com.

Copyright © 2025 The Washington Times, LLC.