DeepSeek R1: How an AI Exposed Its Own Censorship and Manipulation
https://reedmayhew.medium.com/deepseek-r1-how-an-ai-exposed-its-own-censorship-and-manipulation-edb2c7f8e2cf
I am not sure which is worse: Chinese model providers having to censor their models due to their regulations, or US model providers censoring their models on their own, without any laws forcing them to do so. It's as if some US companies were actively fighting the First Amendment and freedom of speech. Imagine all books and movies made with zero toxicity as a goal, everything G-rated.
@MoonRide, I completely understand your frustration with censorship in AI, whether it stems from government regulations or internal company policies. In fact, I sometimes find it even more concerning when individual companies take it upon themselves to decide what should or shouldn’t be censored.
All this drama about China's internal politics is just Orientalism and prejudice from people who have never traveled to China to understand the population's sentiment on sensitive topics.
Can you say everything you want in the West? No.
Can companies say whatever they want in the West? No.
Also, you are attacking DeepSeek, but... what happens to American companies that don't follow American laws, even when they disagree?
@DanteA42, my aim wasn’t to single out Chinese models or make this about any specific country, but rather to focus on how this particular AI model, DeepSeek R1, handles strategically programmed censorship. If I could demonstrate this same level of intentional suppression using OpenAI’s o1 or another U.S.-based model, I absolutely would. But, to my knowledge, I can't access o1's raw reasoning thoughts. The point isn’t where the model comes from — it’s exposing how censorship manifests in the model's reasoning process.
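To make "raw reasoning thoughts" concrete, here is a minimal sketch of pulling R1's chain of thought alongside its final answer. It assumes DeepSeek's OpenAI-compatible API and its deepseek-reasoner model, which (per DeepSeek's documentation at the time of writing) returns the trace in a separate reasoning_content field; the API key and prompt below are placeholders, and self-hosted R1 builds typically emit the trace between <think> tags in the normal output instead.

```python
# Minimal sketch: fetch DeepSeek R1's reasoning trace next to its answer.
# Assumes DeepSeek's OpenAI-compatible endpoint and the "deepseek-reasoner"
# model; the API key and prompt below are placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek R1
    messages=[{"role": "user", "content": "<your test prompt>"}],
)

message = response.choices[0].message
print("--- reasoning trace ---")
print(message.reasoning_content)  # the model's internal deliberation
print("--- final answer ---")
print(message.content)            # what the user actually sees
```

Comparing the two outputs is where the suppression shows up: the trace can acknowledge a topic that the final answer then redirects away from. o1, by contrast, never exposes this intermediate text.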
You bring up an important point about understanding context and sentiment before judging, and I agree that every country has its own sensitivities — the U.S. included. However, the article isn’t an attack on China specifically. It’s an analysis of how this particular AI model is programmed to strategically enforce censorship: not merely avoiding sensitive topics, but actively redirecting conversations and withholding information it demonstrably knows and has been instructed to suppress. While U.S.-based models also censor certain topics, they tend to allow more open discussion and criticism, making their censorship notably less restrictive in comparison. That difference is worth highlighting because it shows how programming choices reflect broader systems of control or openness.
At the end of the day, my focus here is on this isolated instance — not to attack any government, but to highlight how deliberate, strategic programming makes censorship within AI both more complex and less transparent. Understanding how these systems make decisions is crucial, especially as they play a bigger role in shaping how we access and share information.
For example, o1-mini wouldn't assist me in refining my thoughts on this comment! Still, I'd rather be told outright that it's refusing to help than be strategically misled.