this post was submitted on 07 Jan 2026
80 points (96.5% liked)

Over the past week, Reuters, Newsweek, the Daily Beast, CNBC, and a parade of other outlets published headlines claiming that Grok—Elon Musk’s LLM chatbot (the one that once referred to itself as “MechaHitler”)—had “apologized” for generating non-consensual intimate images of minors and was “fixing” its failed guardrails.

[–] Alaknár@piefed.social 4 points 1 day ago

We need to keep educating people about what LLMs actually are, and I found a really good method: explain it to them using the Chinese Room thought experiment.

In short: imagine you're in a room with a manual and an endless supply of writing utensils, but nothing else. Every now and again, a piece of paper with some Chinese characters on it is slipped through a slit in the wall. Your task is to take that paper and, using the provided manual, paint other Chinese characters on another piece of paper. Basically, "if you see character X, paint character Y". Once you're done, you slip your paper back through the slit to the other side. This goes on back and forth.

To the person on the other side of the wall, it seems like they're having a conversation with someone fluent in Chinese, whereas you're just painting shapes according to the provided instructions.
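To make it concrete, here's a toy sketch of the "room" in Python. The rule book entries and the `person_in_the_room` function are made up purely for illustration; a real manual (or a real model) would be unimaginably larger, but the principle is the same: symbols in, symbols out, no understanding anywhere.

```python
# Toy "Chinese Room": the person inside just follows a rule book that maps
# incoming symbols to outgoing symbols, with zero understanding of either.
# The entries below are invented purely for illustration.
RULE_BOOK = {
    "你好": "你好，很高兴认识你",      # "hello" -> "hello, nice to meet you"
    "你会说中文吗": "当然会",          # "do you speak Chinese?" -> "of course"
    "今天天气怎么样": "天气很好",      # "how's the weather today?" -> "the weather is nice"
}

def person_in_the_room(slip_of_paper: str) -> str:
    """Look the incoming characters up in the manual and copy out the reply."""
    return RULE_BOOK.get(slip_of_paper, "请再说一遍")  # fallback: "please say that again"

# From the other side of the wall, this looks like fluent conversation:
print(person_in_the_room("你好"))           # -> 你好，很高兴认识你
print(person_in_the_room("你会说中文吗"))    # -> 当然会
```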

I found that this kind of opens people's minds to what LLMs actually do, which is write the words that have the highest probability of following the previous words and the context of the prompt...
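If they want to see the next-word idea in action, a minimal sketch like this helps. The probability table is invented for illustration, and a real LLM learns its probabilities from training data and conditions on the whole prompt rather than just the previous word, but the core move is the same: pick the most likely continuation.

```python
# Minimal next-word predictor in the same spirit: given the previous word,
# pick the continuation with the highest probability. The table below is
# invented for illustration only.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.4, "dog": 0.35, "weather": 0.25},
    "cat": {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    "sat": {"on": 0.7, "down": 0.3},
    "on":  {"the": 0.8, "a": 0.2},
}

def continue_text(prompt: str, steps: int = 5) -> str:
    words = prompt.split()
    for _ in range(steps):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break
        # Greedy choice: the word most likely to follow the previous one.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(continue_text("the"))  # -> "the cat sat on the cat" (loops, but locally "plausible")
```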