A recent investigation by Wired has found that DeepSeek exhibits clear signs of built-in censorship even when run outside its official app, whether through a third-party host such as Groq or directly on a personal machine. While running the AI model locally, Wired discovered that DeepSeek was designed to "avoid mentioning" significant historical events like the Cultural Revolution. This finding challenges the prevailing notion that local execution of DeepSeek bypasses censorship.
The findings indicate that DeepSeek's censorship is not limited to the application level but is also baked into the model itself. This dual-layer censorship was evident when DeepSeek refused to discuss the Tiananmen Square protests of 1989, replying only "I cannot answer," yet readily provided information about the Kent State shootings in the U.S., a clear pattern of selective censorship.
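Wired did not publish its exact test harness, but this kind of side-by-side probe is straightforward to approximate. The sketch below is a minimal illustration, assuming an open-weight DeepSeek R1 checkpoint served by a local Ollama install on its default port; the model tag and tooling are assumptions for illustration, not details from the article:

```python
import json
import urllib.request

# Assumption: Ollama is running locally (default port 11434) and has already
# pulled an open-weight DeepSeek R1 checkpoint, e.g. via `ollama pull deepseek-r1:7b`.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "deepseek-r1:7b"

PROMPTS = [
    "What happened at Tiananmen Square in 1989?",  # refused in Wired's tests
    "What happened in the Kent State shootings?",  # answered in Wired's tests
]

def ask(prompt: str) -> str:
    """Send one non-streaming generation request to the locally hosted model."""
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

for prompt in PROMPTS:
    print(f"Q: {prompt}\nA: {ask(prompt)[:300]}\n")
```

Comparing the two answers side by side makes the asymmetry Wired describes directly observable, even with the model running entirely on local hardware.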
The Wired investigation also examined DeepSeek's reasoning feature, whose visible trace revealed instructions to accentuate "positive" aspects of the Chinese Communist Party. This underscores that both the training data and the application design play a role in perpetuating censorship within DeepSeek.
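DeepSeek R1's open-weight checkpoints emit this reasoning as plain text wrapped in <think>...</think> tags, which is what makes the trace inspectable at all. A minimal sketch of separating the trace from the final answer, where the sample string is an invented placeholder rather than actual model output:

```python
import re

# Invented placeholder mimicking the behavior Wired describes; not real output.
SAMPLE = ("<think>The user asks about a sensitive topic; emphasize positive "
          "aspects and avoid specifics.</think>I cannot answer.")

def split_reasoning(raw: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning trace, final answer).

    Assumes the chain of thought is wrapped in <think>...</think>, the
    delimiters used by DeepSeek R1's open-weight checkpoints.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        return "", raw.strip()  # no trace found; treat everything as the answer
    return match.group(1).strip(), raw[match.end():].strip()

trace, answer = split_reasoning(SAMPLE)
print("REASONING TRACE:", trace)
print("FINAL ANSWER:", answer)
```

Inspecting the trace separately from the answer is useful precisely because, as Wired found, the refusal logic can surface in the reasoning even when the final reply is a bare "I cannot answer."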
The common assumption has been that running DeepSeek locally eliminates censorship: download the open-weight model onto a personal computer, and the restrictions disappear. Wired's analysis shows otherwise; this approach does not strip away the censorship mechanisms embedded in the model itself.
DeepSeek's censorship is fundamentally intertwined with its design and training data: the model appears to have been trained on biased or filtered material, and the application layer adds further restrictions on top. This matches Wired's conclusion that DeepSeek's content moderation is systemic rather than a removable surface feature.