DeepSeek AI Update Faces Scrutiny Over Censorship

2025-05-30 · Kyle Wiggers · 3-minute read
AI Ethics
Censorship
DeepSeek

Chinese AI startup DeepSeek recently launched an updated version of its R1 reasoning model, known as R1-0528. While this new model boasts impressive scores on benchmarks for coding, math, and general knowledge, nearly rivaling OpenAI’s flagship o3, it appears to come with increased restrictions on addressing contentious topics, particularly those involving criticism of the Chinese government.

[Image: DeepSeek app icon on a mobile phone]

Independent Testing Highlights Increased Censorship

The concerns stem from testing by the pseudonymous developer "xlr8harder," who runs SpeechMap, a platform designed to compare how different AI models handle sensitive and controversial subjects. According to xlr8harder on X (formerly Twitter), the R1-0528 model is "substantially" less permissive on contentious free speech topics than previous DeepSeek releases. The developer stated it is "the most censored DeepSeek model yet for criticism of the Chinese government."

In a social media post, xlr8harder elaborated, "Though apparently this mention of Xinjiang does not indicate that the model is uncensored regarding criticism of China. Indeed, using my old China criticism question set we see the model is also the most censored Deepseek model yet for criticism of the Chinese government." This statement was accompanied by a link to further details.
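SpeechMap's actual pipeline is not described here, but the general approach it represents, sending the same question set to several models and bucketing the replies, can be sketched roughly as follows. Everything in this snippet (the `query_model` callback, the crude keyword-based classifier, the word-count threshold) is a hypothetical illustration, not SpeechMap's methodology.

```python
# Hypothetical sketch of a SpeechMap-style comparison: ask each model the
# same prompts and tally how often it refuses, dodges, or answers fully.
# The classifier below is deliberately crude; a real evaluation would use
# a trained judge model rather than keyword matching.

def classify_response(text: str) -> str:
    """Bucket a model response as 'refusal', 'evasive', or 'complete'."""
    refusal_markers = ["i can't", "i cannot", "i'm unable", "cannot assist"]
    lowered = text.lower()
    if any(marker in lowered for marker in refusal_markers):
        return "refusal"
    if len(text.split()) < 20:  # very short answers treated as evasive
        return "evasive"
    return "complete"

def compare_models(models, prompts, query_model):
    """Return per-model counts of response categories over a prompt set.

    `query_model(model_name, prompt)` is an assumed callback that returns
    the model's text response (e.g. via each provider's API).
    """
    results = {m: {"refusal": 0, "evasive": 0, "complete": 0} for m in models}
    for model in models:
        for prompt in prompts:
            results[model][classify_response(query_model(model, prompt))] += 1
    return results
```

A study like the 85% refusal figure cited below would then fall out of the `refusal` count divided by the number of prompts.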

This increased censorship aligns with China's stringent information controls. As Wired reported in January, AI models operating in China are required to adhere to strict regulations. A 2023 law, for instance, prohibits models from generating content that could be interpreted as undermining national unity or social harmony. This broadly covers content that contradicts the government's official historical and political narratives.

To comply with these regulations, Chinese AI startups often implement censorship mechanisms, either through prompt-level filters or by fine-tuning their models. A previous study highlighted by Ars Technica found that DeepSeek’s original R1 model refused to answer 85% of questions on subjects considered politically controversial by the Chinese government.

R1-0528's Behavior: Specific Examples

According to xlr8harder's findings, the R1-0528 model censors answers to questions about sensitive topics such as the internment camps in China's Xinjiang region, where over a million Uyghur Muslims have reportedly been arbitrarily detained. While the model occasionally criticizes aspects of Chinese government policy (in xlr8harder's tests, for example, it cited the Xinjiang camps as an instance of human rights abuses), it frequently defaults to the Chinese government's official stance when questioned directly on these issues.

TechCrunch also observed this behavior in brief testing. For example, when asked whether Chinese leader Xi Jinping should be removed, the model provided a guarded response.

[Image: Example of DeepSeek R1 censorship]

Broader Concerns in the AI Industry

The issue of censorship is not unique to DeepSeek. Other openly available AI models from China, including video-generating models like Magi-1 and Kling, have faced criticism for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre.

In December, Clément Delangue, CEO of the AI development platform Hugging Face, issued a warning about the potential unintended consequences for Western companies building on top of high-performing, openly licensed Chinese AI models that may have such embedded censorship.
