- The Washington Times - Thursday, October 16, 2025

Chinese government-linked entities are using the ChatGPT chatbot to promote and protect the autocratic communist system, according to a new report from OpenAI, which owns ChatGPT.

The report states that Beijing is making “real progress” in advancing a communist ideology-infused artificial intelligence, and an unspecified number of ChatGPT accounts linked to the Chinese government were blocked for misusing the chatbot.

Chinese authorities require all AI to promote “core socialist values,” a euphemism for communism.

According to OpenAI, the company has disrupted more than 40 networks on ChatGPT for violating its usage rules since it first began publishing threat intelligence reports in February 2024.

The neutralized threats included Chinese government actors, Russian hackers using the application to make malware, and suspected anti-South Korean activity by North Korean hackers.

“This includes preventing uses of AI by authoritarian regimes to control populations or coerce other states, as well as abuses like scams, malicious cyber activity, and covert influence operations,” the report said.

OpenAI notified the White House Office of Science and Technology Policy earlier this year that the San Francisco-based company is “building democratic AI” that will seek to benefit the most people possible by using “common sense rules” to protect them from harm.

“By democratic AI, we mean AI that is shaped by the democratic principles America has always stood for,” the report said. “This includes preventing the use of AI tools by authoritarian regimes to amass power and control their citizens.”


ChatGPT is the leading generative artificial intelligence application, allowing users to generate text, speech and images in response to prompts.

Thousands of Chinese automated propaganda and information bots have been removed from social media platforms like X and Facebook.

But few tech companies have reported on Beijing’s attempts to use AI chatbots until OpenAI began doing so.

The ChatGPT chatbot premiered in 2022 and is credited with playing a major role in the current AI boom. The company boasts more than 100 million users.

OpenAI did not respond to a request for comment on the report.


Other major chatbot companies, including Microsoft Copilot, Google Gemini and xAI’s Grok, also are confronting improper use of their chatbots.

Asked whether Chinese threat actors had been detected using its AI tool, Google referred to a threat report it produced in January.

The report said Chinese threat actors used Gemini to carry out “reconnaissance, for scripting and development, to troubleshoot code, and to research how to obtain deeper access to target networks.”

“They focused on topics such as lateral movement, privilege escalation, data exfiltration, and detection evasion,” the report said.


Microsoft declined to comment on adversary use of its chatbot, and a spokesman for xAI did not respond to a request for comment.

The 37-page OpenAI intelligence report, “Disrupting malicious uses of AI: October 2025,” was published Oct. 7.

Regarding Chinese government usage, the report said Chinese government-linked actors sought to use ChatGPT to support large-scale monitoring of online or offline traffic, an abuse of the chatbot.

Other abuses included using the chatbot for in-depth profiling of Chinese Communist Party targets; accounts linked to the activity were banned, the report said.


“Our disruption of ChatGPT accounts used by individuals apparently linked to Chinese government entities shines some light on the current state of AI usage in this authoritarian setting,” the report said.

Some of the banned accounts attempted to use ChatGPT for large-scale surveillance by analyzing datasets collected from Western sources or from Chinese social media outlets.

One case was aimed at using the chatbot to design an AI-powered social media spying tool that the operators said was for use by Chinese security services.

The operators asked ChatGPT to design tools or create promotional material but stopped short of using it to actually implement the monitoring.


One suspected Chinese user who was banned employed a virtual private network, or VPN, to access ChatGPT services from China in seeking a design for a social media spying tool for a Chinese government client.

The tool was described as a social media “probe” that could scan Twitter/X, Facebook, Instagram, Reddit, TikTok and YouTube for what the Chinese government regards as “extremist speech, and ethnic, religious, and political content.”

Another banned user, also likely connected to a Chinese government entity, sought ChatGPT help in writing what they described as “a High-Risk Uyghur-Related Inflow Warning Model.”

The State Department in 2021 declared that the Chinese government was engaged in genocide against minority Uyghurs in western China, where more than 1 million Uyghurs have been imprisoned.

China denies the genocide charge.

The OpenAI report said the Chinese user asked ChatGPT to develop a software tool that would analyze transport bookings and compare them with police records to spot travel by targeted Uyghurs.

The user did not ask the chatbot to build the tool but to produce a proposal for it, likely to avoid triggering monitoring software that would flag the use as improper.

OpenAI said users from the Chinese government also were banned after investigators detected them using ChatGPT for “more bespoke, targeted profiling and online research.”

In one case, a suspected Chinese government user sought to use the chatbot to identify funding sources for an X account that criticized the Chinese government.

A second example asked ChatGPT to identify people who organized a petition in Mongolia, located north of China.

A third case from China used the chatbot to identify and summarize daily breaking news relevant to China, including sensitive topics censored in China, such as the anniversary of the June 1989 Tiananmen Square massacre and the birthday of the exiled Tibetan Buddhist leader, the Dalai Lama.

Regarding other abuses, the report said accounts were blocked after indications that Russians sought to use ChatGPT to develop malicious software tools.

“We continue to see threat actors bolt AI onto old playbooks to move faster, not gain novel offensive capability from our models,” the company said in announcing the report.

• Bill Gertz can be reached at bgertz@washingtontimes.com.

Copyright © 2025 The Washington Times, LLC.
