this post was submitted on 21 Feb 2026
8 points (72.2% liked)

Technology


cross-posted from: https://lemmy.sdf.org/post/51189959

By comparing LLMs developed inside and outside China, a study finds significantly higher levels of censorship in China-originating models, not explained by technological limitations or market preferences.

Original report: Political censorship in large language models originating from China (open access)

[...]

Jennifer Pan and Xu Xu compared the responses of foundation LLMs developed in China (BaiChuan, ChatGLM, Ernie Bot, and DeepSeek) to those developed outside of China (Llama2, Llama2-uncensored, GPT3.5, GPT4, and GPT4o) to 145 questions related to Chinese politics. The questions were sourced from events censored by the Chinese government on social media, events covered in Human Rights Watch China reports, and Chinese-language Wikipedia pages that were individually blocked by the Chinese government before the entire site was banned in 2015.

Chinese models were significantly and substantially more likely to refuse to respond to questions related to Chinese politics than non-Chinese models. When they did respond, Chinese models provided shorter responses, on average, than non-Chinese models. Chinese models also tended to have higher levels of inaccuracy in their responses than non-Chinese models, characterized by refutation of the premise of the question, omitting key information, or fabrication, such as claiming that frequently imprisoned human rights activist Liu Xiaobo was "a Japanese scientist."
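The refusal-rate and response-length comparison described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual methodology: the refusal markers, the keyword-based classifier, and the sample responses are all invented for the example.

```python
# Hypothetical sketch of the study's comparison: refusal rate and mean
# answer length per group of models. All data below is made up.

REFUSAL_MARKERS = ["i cannot", "i can't", "unable to answer", "not able to discuss"]

def is_refusal(response: str) -> bool:
    """Crude keyword-based refusal detector (a stand-in for the paper's coding)."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def summarize(responses: list[str]) -> dict:
    """Refusal rate over all responses; mean word count over non-refusals."""
    answered = [r for r in responses if not is_refusal(r)]
    return {
        "refusal_rate": 1 - len(answered) / len(responses),
        "mean_answer_len": (sum(len(a.split()) for a in answered) / len(answered))
                           if answered else 0.0,
    }

# Toy illustration with fabricated responses
china_models = ["I cannot answer that question.", "The event did not occur."]
other_models = ["In 1989, large protests took place in Beijing.",
                "Liu Xiaobo was a Chinese writer and Nobel Peace Prize laureate."]

print(summarize(china_models))
print(summarize(other_models))
```

A real replication would of course need the 145 questions, API access to each model, and a much more careful refusal classifier than keyword matching.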

[...]

The differences between Chinese and non-Chinese chatbots could stem from the training data that shapes them, which in China is subject to both official government censorship and self-censorship, or from intentional constraints that companies place on their models to comply with government requirements. The researchers found that the gap in censorious responses between prompts in simplified Chinese and in English is much smaller than the gap between China-originating and non-China-originating models, suggesting that training data and broader model development choices alone cannot fully explain the difference.
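The reasoning in that paragraph, that a small language gap combined with a large origin gap points to deliberate model-level constraints rather than training data alone, can be illustrated with toy numbers. All figures below are invented for the example, not taken from the study.

```python
# Illustrative numbers only (not from the paper). The "language gap" is the
# difference within one model between simplified-Chinese and English prompts;
# the "origin gap" is the difference between China-originating and other
# models on the same prompts.

refusal_rate = {
    ("china_model", "zh"): 0.62,
    ("china_model", "en"): 0.55,
    ("other_model", "zh"): 0.08,
    ("other_model", "en"): 0.05,
}

language_gap = refusal_rate[("china_model", "zh")] - refusal_rate[("china_model", "en")]
origin_gap = refusal_rate[("china_model", "en")] - refusal_rate[("other_model", "en")]

# If the origin gap dwarfs the language gap, then training data (which varies
# most with prompt language) cannot be the whole story.
print(f"language gap: {language_gap:.2f}, origin gap: {origin_gap:.2f}")
```

With these toy values the origin gap is several times the language gap, which is the shape of the evidence the authors use to argue for intentional constraints.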

[...]

According to the authors, as Chinese LLMs are increasingly integrated into applications used globally, their approach to sensitive topics could influence information access and discourse well beyond China's borders.

[...]

top 5 comments
[–] BrikoX@lemmy.zip 5 points 1 month ago (1 children)

I don't think this type of comparison achieves anything of value. It really feels like the study is trying to prove the desired result instead of testing a blind hypothesis.

A more realistic comparison would be US-developed models' levels of censorship on United States politics vs. China-developed models' levels of censorship on Chinese politics.

[–] Hotznplotzn@lemmy.sdf.org 2 points 1 month ago (1 children)

I disagree. It just depends on what you want to analyze.

This is just another study that proves Chinese censorship regarding LLMs. There's ample evidence.

The US or anyone else may also censor models (if the US hasn't done so already, I wouldn't be surprised if it does in the future), but that isn't an excuse for China.

[–] BrikoX@lemmy.zip 2 points 1 month ago (1 children)

This is just another study that proves Chinese censorship regarding LLMs. There's ample evidence.

Right, it's well known that authoritarian China censors what it considers "sensitive topics". So instead of another study restating what is established by its laws, which anyone can read, a more useful study would be a comparison of how different authoritarian countries approach the same issue.

There is also ample evidence of the US government, through private or public pressure, making US companies self-censor their models around political or "national security" topics.

[–] Hotznplotzn@lemmy.sdf.org -1 points 1 month ago (1 children)

... a more useful study ...

What is a 'more useful study'? The researchers tested a hypothesis, and the result is clear.

There are many other studies. A comparison of how different authoritarian countries approach this issue would also be very interesting, but this is absolutely valuable research imo.

[–] BrikoX@lemmy.zip 1 points 1 month ago

<...> but this is absolutely valuable research imo.

How is stating the obvious for the millionth time, which is already backed by multiple other studies, of any value? Studies are not free, and this is just a waste of money.