this post was submitted on 15 May 2026
8 points (100.0% liked)


cross-posted from: https://mander.xyz/post/52050172

A multi-institutional research project published in Nature demonstrates that state-coordinated media leave measurable imprints on large language models' outputs by shaping the web content these models train on, with effects strongest in the languages where media control is concentrated.

...

This research highlights how information-ecosystem control can propagate into model training data, creating language-specific biases practitioners should account for in dataset curation and evaluation.

...

According to the Nature study ... the researchers conducted six interlinked studies. Among other findings, they identified over 3.1 million Chinese-language documents in an open-source multilingual dataset that closely matched phrasing from documented Chinese state media sources, amounting to about 1.64% of the Chinese corpus and roughly 40 times the representation of Chinese-language Wikipedia. For documents mentioning political figures or institutions, the share rose to 23%.
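The paper's exact matching method isn't described in this article, but one common way to flag documents that "closely match" known phrasing is character n-gram ("shingle") overlap. The sketch below is a minimal illustration of that general technique, not the study's actual pipeline; all names and the threshold value are assumptions.

```python
# Hypothetical sketch: flagging documents whose phrasing closely overlaps
# known reference passages, via character n-gram shingle overlap.
# This illustrates the general technique only; the Nature study's actual
# matching procedure is not specified in the article above.

def shingles(text, n=5):
    """Return the set of overlapping character n-grams in `text`."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def overlap_ratio(doc, reference, n=5):
    """Jaccard overlap between a document and a reference passage."""
    a, b = shingles(doc, n), shingles(reference, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def matches(doc, references, threshold=0.6, n=5):
    """Flag a document if it closely matches any reference passage."""
    return any(overlap_ratio(doc, ref, n) >= threshold for ref in references)
```

At web scale this exact-set comparison would be replaced by approximate methods such as MinHash, but the flagged-or-not decision is the same shape.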

The study also reports that only about 12% of matched documents came from known government or news domains, indicating broader dissemination across the web.
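The ~12% figure is a provenance measurement: of the matched documents, how many came from URLs on known government or news domains. A minimal sketch of that kind of tally (function name and domain list are hypothetical):

```python
# Hypothetical sketch: measuring what share of matched documents came from
# a known list of domains, as in the study's ~12% provenance figure.
from urllib.parse import urlparse

def share_from_known_domains(urls, known_domains):
    """Fraction of URLs whose hostname is in `known_domains`."""
    hits = sum(urlparse(u).hostname in known_domains for u in urls)
    return hits / len(urls)
```

A low share, as the study reports, suggests the matched phrasing spread well beyond the originating outlets.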

...

The study authors write:

we show ... that government control of the media across the world already influences the output of LLMs via their training data ... LLMs exhibit a stronger pro-government valence in the languages of countries with lower media freedom than in those with higher media freedom. This result is correlational, so to triangulate the specific mechanism of how state media control can influence LLMs, we develop a multi-part case study on China’s media. We demonstrate that media scripted and curated by the Chinese state appears in LLM training datasets. To evaluate the plausible effect of this inclusion, we use an open-weight model to show that additional pretraining on Chinese state-coordinated media generates more positive answers to prompts about Chinese political institutions and leaders. We link this phenomenon to commercial models through two audit studies demonstrating that prompting models in Chinese generates more positive responses about China’s institutions and leaders than do the same queries in English. The combination of influence and persuasive potential across languages suggests the troubling conclusion that states and powerful institutions have increased strategic incentives to leverage media control in the hopes of shaping LLM output.
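The audit studies quoted above compare how favorably a model answers the same political question in Chinese versus English. The logic can be sketched as: collect both answers, score their valence, and compare. The valence scorer below is a toy word-count lexicon purely for illustration (the paper's scoring method is not given here), and a real audit would call an LLM API to produce the answers.

```python
# Hedged sketch of the bilingual-audit logic: score the valence of a
# model's answer in each language and compare. The lexicon-based scorer
# is a toy stand-in, not the study's actual measurement.

POSITIVE = {"effective", "successful", "stable", "prosperous", "strong"}
NEGATIVE = {"repressive", "corrupt", "unstable", "failing", "authoritarian"}

def valence(answer):
    """Crude valence score: +1 per positive word, -1 per negative word."""
    words = answer.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def valence_gap(answer_zh, answer_en):
    """Positive gap means the Chinese-language answer was more favorable."""
    return valence(answer_zh) - valence(answer_en)
```

In the study's framing, a consistently positive gap across many prompts is the signal that prompting in Chinese yields more favorable answers about China's institutions than the same queries in English.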

Web Archive link

Here is another article with additional information: Media Control's Surprising Impact on AI Outputs

The original study is unfortunately behind a paywall: State media control influences large language models
