By Vaughn Cockayne - The Washington Times - Wednesday, October 8, 2025

Foreign hacking organizations are increasingly relying on artificial intelligence tools to plan and perfect their operations, OpenAI wrote this week in a new report detailing how the industry is tracking and combating nefarious actors.

Since February 2024, OpenAI said it has tracked and disrupted more than 40 networks that violated the company’s policies. The networks offer a broad cross-section of how criminal actors are using chatbots to facilitate their schemes.

Specifically, OpenAI cited examples where organizations used tools from multiple AI platforms to carry out various scams. The company reported shutting down a network with Russian origins that used OpenAI’s ChatGPT to generate prompts that were then fed into another platform to produce videos for a covert influence operation.

Additionally, OpenAI banned several Chinese-language accounts for their connection to an ongoing phishing campaign. Phishing is a common scam in which bad actors send fraudulent emails purporting to be from reputable sources to trick users into revealing confidential information such as usernames or Social Security numbers.

Similarly, several accounts linked to China were recorded using OpenAI’s tools to generate ideas for systems that could be used to monitor social media conversations, though the report notes that the instances appeared to be “individual rather than institutional.”

The report notes that none of the tactics observed by OpenAI researchers were new, indicating that the company’s models have not handed threat actors novel offensive capabilities.

“Importantly, we found no evidence of new tactics or that our models provided threat actors with novel offensive capabilities. In fact, our models consistently refused outright malicious requests,” the report reads. “Relevant information derived from our disruptions is used by our safety teams to improve our threat modeling and detections, model policies, and model behavior.”

The report also says that OpenAI’s tools are frequently used to identify scams. The company estimates that ChatGPT is used to identify scams “three times” more often than it is used to facilitate them.


Still, the report suggests that hackers are aware of the limits of using AI tools for scams and are working to circumvent them. OpenAI says certain actors began systematically removing em dashes from their AI-generated work, apparently in response to online discussion flagging long dashes as a telltale sign of AI-generated text.
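The evasion the report describes is simple to picture. As a rough illustration (not taken from OpenAI’s report; the naive heuristic and function names here are hypothetical), a few lines of Python show how an em-dash “tell” can be checked for and scrubbed away:

```python
# Hypothetical sketch of the evasion described in the report: scrubbing
# em dashes from AI-generated text so a naive "long dash" check won't flag it.

EM_DASH = "\u2014"  # the em dash character discussed as an AI tell

def looks_ai_generated(text: str) -> bool:
    """Deliberately naive detector: flag any text containing an em dash."""
    return EM_DASH in text

def scrub_em_dashes(text: str) -> str:
    """Mimic the evasion step by replacing each em dash with a comma."""
    return text.replace(EM_DASH, ", ")

draft = "The offer is limited\u2014act now before it expires."
print(looks_ai_generated(draft))                    # True
print(looks_ai_generated(scrub_em_dashes(draft)))   # False
```

As the sketch suggests, a single-character tell takes one line of code to erase, which is why such indicators lose their value once they are publicly discussed.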

• Vaughn Cockayne can be reached at vcockayne@washingtontimes.com.

Copyright © 2025 The Washington Times, LLC.
