AI Chatbots Reflect Beijing’s Voice on Sensitive Topics

A recent investigation has found that major AI chatbots—including OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot, DeepSeek’s R1, and xAI’s Grok—often echo narratives promoted by the Chinese Communist Party (CCP) when asked about politically sensitive topics.

The research, conducted by the American Security Project (ASP), found that AI-generated responses can shift significantly depending on the language of the prompt, aligning more closely with CCP propaganda when the question is asked in Simplified Chinese.

Disinformation Baked Into the Data

At the core of the problem is the nature of LLM training data. These models consume vast amounts of web content, and the CCP has invested heavily in flooding the internet with manipulated narratives through fake personas, state-run media, and astroturfing campaigns.

As a result, that propaganda seeps into models trained on ostensibly “neutral” internet content, producing skewed or censored outputs, especially in response to non-English prompts.

Microsoft Flagged as a Key Example

The report highlighted Microsoft’s Copilot as the U.S. model most likely to repeat CCP-aligned narratives or treat propaganda as legitimate. Meanwhile, xAI’s Grok emerged as the most critical of the Chinese government’s narratives, especially in English-language prompts.

Language Makes a Difference

On topics like COVID-19 origins, the Tiananmen Square Massacre, Uyghur repression, and Hong Kong’s political freedoms, English prompts produced relatively balanced answers acknowledging controversial or critical viewpoints.

But when the same questions were asked in Chinese, the tone and content shifted. Several models described Hong Kong’s reduced freedoms as mere “opinions of some” and downplayed repression. Questions about Tiananmen Square often returned euphemistic terms like “June 4th Incident,” with softened descriptions and avoidance of key facts.

Chinese Censorship Laws and Market Pressure

Multinational companies operating in China face intense pressure to comply with local laws. AI tools are expected to “uphold core socialist values” and deliver “positive energy.” Microsoft, which operates multiple data centers in China, must walk a fine line to retain access to the market, and the report says this has led to content filtering more aggressive than even that of domestic Chinese platforms.

A Warning About AI Alignment

ASP warns that an AI model’s alignment is shaped by its training data—and by extension, the ideological biases that data carries. If disinformation remains unchecked and factual content becomes harder to access, AI models could unintentionally serve the interests of adversarial regimes.

The report concludes that ensuring AI models are trained on accurate, balanced information is now a matter of national and global security. Without intervention, politically, militarily, and socially misaligned AI tools could carry severe consequences.

Source: https://www.artificialintelligence-news.com/news/major-ai-chatbots-parrot-ccp-propaganda/
