this post was submitted on 07 Jan 2026
Technology

Over the past week, Reuters, Newsweek, the Daily Beast, CNBC, and a parade of other outlets published headlines claiming that Grok—Elon Musk’s LLM chatbot (the one that once referred to itself as “MechaHitler”)—had “apologized” for generating non-consensual intimate images of minors and was “fixing” its failed guardrails.

top 4 comments
Technus@lemmy.zip 23 points 1 day ago

I fucking hate when people ask an LLM "what were you thinking" because the answer is meaningless, and it just showcases how little people understand of how they actually work.

Any activity inside the model that could be considered any remote approximation of "thought" is completely lost as soon as it outputs a token. The only memory it has is the context window, the past history of inputs and outputs.

All it does when you ask it that is look over the past output and attempt to rationalize what it previously generated.

And actually, even that is excessively anthropomorphizing the model. In reality, it's just generating a plausible response to the question "what were you thinking", given the history of the conversation.
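The statelessness the commenter describes can be sketched in a few lines. This is a toy stand-in, not any real LLM API: `generate` here is a hypothetical function, and the point is only that every turn re-sends the full transcript, because the transcript is the model's only "memory".

```python
# Sketch of why "what were you thinking?" is meaningless to an LLM.
# `generate` is a hypothetical stand-in for any stateless next-token
# model: it sees only the transcript it is handed, nothing from
# previous calls survives.

def generate(transcript: list) -> str:
    # A real model would return the most plausible continuation of
    # exactly this text -- and nothing more.
    return f"[plausible continuation of {len(transcript)} messages]"

transcript = []
for user_msg in ["Draw me a picture", "What were you thinking?!"]:
    transcript.append(f"User: {user_msg}")
    reply = generate(transcript)  # full history re-sent every turn
    transcript.append(f"Model: {reply}")

# The "answer" to the second question is generated from the visible
# transcript alone; whatever internal activations produced the first
# reply were discarded the moment its tokens were emitted.
print(transcript[-1])  # -> Model: [plausible continuation of 3 messages]
```

Real chat APIs work the same way: the client keeps the message list and sends the whole thing back on each request.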

I fucking hate this version of "AI". I hate how it's advertised. I hate the managers and executives drinking the Kool-Aid. I hate that so much of the economy is tied up in it. I hate that it has the energy demand and carbon footprint of a small nation-state. It's absolute insanity.

Alaknár@piefed.social 4 points 1 day ago

We need to keep educating people about it, and I found a really good method - explain it to them using the Chinese Room thought experiment.

In short: imagine you're in a room with a manual and infinite writing utensils, but nothing else. Every now and again a piece of paper with some Chinese characters is slipped through a slit in the wall. Your task is to get that paper, and - using the provided manual - paint other Chinese characters on another piece of paper. Basically, "if you see X character, then paint Y character". Once you're done, you slip your paper through the slit to the other side. This goes on back and forth.

To the person on the other side of the wall, it seems like they're having a conversation with someone fluent in Chinese, whereas you're just painting shapes based on the provided instructions.
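The room's "manual" is literally just a lookup: symbol in, symbol out, with no comprehension on either side. A minimal sketch (the rule table here is invented for illustration):

```python
# The Chinese Room as code: the "person in the room" follows rules
# mechanically, mapping input characters to output characters with
# zero understanding of what either side means.
rulebook = {
    "你好": "你好！",        # "if you see X character, then paint Y character"
    "你会说中文吗": "会。",
}

def room(slip: str) -> str:
    # Look the incoming characters up in the manual, copy out the answer.
    return rulebook.get(slip, "？")

print(room("你好"))  # fluent-looking reply, no comprehension involved
```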

I found that this kind of opens people's minds to what LLMs actually do - which is writing words that have the highest probability of following previous words and the context of the prompt...
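That "highest probability of following previous words" framing can also be shown directly. The probability table below is made up for illustration; a real model computes it from billions of parameters, but the sampling step at the end is genuinely all the output stage does.

```python
import random

# Toy next-token model: given the context so far, assign a probability
# to each candidate next word and sample from that distribution.
# The table is invented; a real LLM derives it from its weights.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "quantum": 0.1},
}

def next_word(context: tuple) -> str:
    probs = next_word_probs[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word(("the", "cat")))  # usually "sat" -- probability, not understanding
```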

Perspectivist@feddit.uk 0 points 22 hours ago (last edited 22 hours ago)

> I hate how it’s advertised

Where are you guys seeing ads for AI? I've literally seen zero, ever, unless we start redefining news articles as ads, which still wouldn't make any sense, since 99% of the ones I come across are critical of AI rather than praising it.

riskable@programming.dev 1 points 1 day ago

Good catch!