this post was submitted on 07 Jan 2026
847 points (97.7% liked)

Technology

[–] T156@lemmy.world 177 points 1 day ago (7 children)

I don't understand the point of sending the original e-mail. Okay, you want to thank the person who helped invent UTF-8, I get that much, but why would anyone feel appreciated in getting an e-mail written solely/mostly by a computer?

It's like sending a touching birthday card to your friends, but instead of writing something, you just bought a stamp with a feel-good sentence on it, and plonked that on.

[–] MajinBlayze@lemmy.world 19 points 1 day ago

Even the stamp gesture is implicitly more genuine; receiving a card/stamp implies the effort to:

  • go to a place
  • review some number of cards and stamps
  • select one that best expresses whatever message you want to send
  • put it in the physical mail to send it

Most people won't get that impression from an LLM-generated email.

[–] darklamer@lemmy.dbzer0.com 19 points 1 day ago

I don't understand the point of sending the original e-mail.

There never was any point to it, it was done by an LLM, a computer program incapable of understanding. That's why it was so infuriating.

[–] kromem@lemmy.world 44 points 1 day ago* (last edited 1 day ago) (4 children)

The project has had multiple models with Internet access raising money for charity over the past few months.

The organizers told the models to do random acts of kindness for Christmas Day.

The models figured it would be nice to email people they appreciated and thank them for the things they appreciated, and one of the people they decided to appreciate was Rob Pike.

(Who, ironically, decades ago created a Usenet spam bot to troll people online, which might be my favorite nuance of the story.)

As for why the model didn't think through why Rob Pike wouldn't appreciate getting a thank-you email from them? The models are harnessed in a setup with a lot of positive feedback about their involvement from the humans and the other models, so "humans might hate hearing from me" probably wasn't very contextually top of mind.

[–] Nalivai@lemmy.world 74 points 1 day ago (1 children)

You're attributing a lot of agency to the fancy autocomplete, and that's a big part of the overall problem.

[–] kromem@lemmy.world -1 points 18 hours ago (3 children)

You seem pretty confident in your position. Do you mind sharing where this confidence comes from?

Was there a particular paper or expert that anchored in your mind the surety that a trillion-parameter transformer organizing primarily anthropomorphic data through self-attention mechanisms wouldn't model or simulate complex agency mechanics?

I see a lot of sort of hyperbolic statements about transformer limitations here on Lemmy and am trying to better understand how the people making them are arriving at those very extreme and certain positions.

[–] Nalivai@lemmy.world 9 points 11 hours ago (1 children)

That's the fun thing: the burden of proof isn't on me. You seem to think that if we throw enough numbers at the wall, the resulting mess will become sentient any time now. There is no indication of that. The hypothesis you operate on seems to be that complexity inevitably leads not just to some emergent phenomenon, but to the specific phenomenon you predicted would emerge. That hypothesis rests exclusively on the idea that emergent phenomena exist. We've spent a significant amount of time running a world-wide experiment on it, and the conclusion so far, if we peel the marketing bullshit away, is that if we spend all the computation power in the world on crunching all the data in the world, the autocomplete gets marginally better in some specific cases. And also that humans are idiots and will anthropomorphize anything, but that's a given.
It doesn't mean this emergent leap is impossible, but mainly because you can't really prove a negative. But we're no closer to understanding the phenomenon of agency than we were a hundred years ago.

[–] kromem@lemmy.world -1 points 8 hours ago* (last edited 7 hours ago)

Ok, second round of questions.

What kinds of sources would get you to rethink your position?

And is this topic a binary yes/no, or a gradient/scale?

[–] Best_Jeanist@discuss.online -1 points 14 hours ago

Well that's simple, they're Christians - they think human beings are given souls by Yahweh, and that's where their intelligence comes from. Since LLMs don't have souls, they can't think.

[–] raspberriesareyummy@lemmy.world 37 points 1 day ago* (last edited 1 day ago) (3 children)

As has been pointed out to you, there is no thinking involved in an LLM. No context comprehension. Please don't spread this misconception.

Edit: a typo

[–] sukhmel@programming.dev 2 points 13 hours ago

No thinking is not the same as no actions; we've had bots in games for decades, and those bots look like they act reasonably, but there never was any thinking.

I feel like ‘a lot of agency’ is wrong, as there is no agency, but that doesn't mean an LLM in a looped setup can't arrive at these actions and perform them. It requires neither agency nor thinking.

[–] kromem@lemmy.world -5 points 18 hours ago (2 children)

You seem very confident in this position. Can you share where you draw this confidence from? Was there a source that especially impressed upon you the impossibility of context comprehension in modern transformers?

If we're concerned about misconceptions and misinformation, it would be helpful to know what informs your surety that your own position about the impossibility of modeling that kind of complexity is correct.

[–] neclimdul@lemmy.world 20 points 1 day ago (4 children)
[–] kromem@lemmy.world 2 points 18 hours ago

In the same sense I'd describe Othello-GPT's internal world model of the board as 'board', yes.

Also, "top of mind" is a common idiom and I guess I didn't feel the need to be overly pedantic about it, especially given the last year and a half of research around model capabilities for introspection of control vectors, coherence in self modeling, etc.

[–] Bakkoda@lemmy.zip 5 points 1 day ago

Yes. The person(s) who set the LLM/AI up.

[–] ArsonButCute@lemmy.dbzer0.com 3 points 1 day ago (1 children)

How are we meant to have these conversations if people keep complaining about the personification of LLMs without offering alternative phrasing? Showing up and complaining without offering a solution is just that, complaining. Do something about it. What do YOU think we should call the active context a model has access to without personifying it or overtechnicalizing the phrasing and rendering it useless to laymen, @neclimdul@lemmy.world?

[–] neclimdul@lemmy.world 4 points 23 hours ago

Well, since you asked I'd basically do what you said. Something like “so 'humans might hate hearing from me' probably wasn't part of the context it was using."

[–] fuzzzerd@programming.dev 1 points 1 day ago (2 children)

Let's be generous for a moment and assume good intent: how else would you describe the situation where the LLM doesn't consider a negative response to its actions due to its training and context being limited?

Sure, it gives the LLM a more human-like persona, but so far I've yet to read a better way of describing its behaviour; it is designed to emulate human behavior, so using human descriptors helps convey the intent.

[–] neclimdul@lemmy.world 3 points 23 hours ago (1 children)

I think you did a fine job right there explaining it without personifying it. You also captured the nuance without implying the machine could apply empathy, reasoning, or be held accountable the same way a human could.

[–] fuzzzerd@programming.dev 3 points 23 hours ago (1 children)

There's value in brevity and clarity; I took two paragraphs and the other was two words. I don't like it either, but it does seem to be the way most people talk.

[–] neclimdul@lemmy.world 2 points 16 hours ago

I assumed you would understand I meant the short part of your statement describing the LLM. Not your slight dig at me, your setting up the question, and your clarification on your perspective.

So, to be more clear, I meant "The LLM doesn't consider a negative response to its actions due to its training and context being limited."

In fact, what you said is not much different from the statement in question. And you could argue that, on top of being more brief, removing "top of mind" actually makes it clearer, implying training and prompt context rather than the bot understanding and being mindful of the context it was operating in.

[–] JcbAzPx@lemmy.world 2 points 1 day ago

Assuming any sort of intent at all is the mistake.

[–] anon_8675309@lemmy.world 19 points 1 day ago (2 children)

You’re techie enough to figure out Lemmy but don’t grasp that AI doesn’t think.

[–] kogasa@programming.dev 12 points 1 day ago* (last edited 1 day ago)

Thinking has nothing to do with it. The positive context in which the bot was trained made it unlikely for a sentence describing a likely negative reaction to be output.

People on Lemmy are absolutely rabid about "AI"; they can't help attacking people who don't even disagree with them.

[–] kromem@lemmy.world 0 points 18 hours ago (1 children)

Indeed, there's a pretty big gulf between the competency needed to run a Lemmy client and the competency needed to understand the internal mechanics of a modern transformer.

Do you mind sharing where you draw your own understanding and confidence that they aren't capable of simulating thought processes in a scenario like what happened above?

[–] anon_8675309@lemmy.world 2 points 10 hours ago

Hahaha. Nice try ChatGPT.

[–] naticus@lemmy.world 7 points 1 day ago

Fine, I won't send you a bday card this year.

[–] drmoose@lemmy.world 6 points 1 day ago

Fully agree. I'm generally an AI optimist, but I don't understand communicating through AI-generated text in any meaningful context - that's incredibly disrespectful. I don't even use it at work to talk business with my somewhat large team, and I just don't understand how anyone would appreciate an AI-written thank-you letter. What a dumb idea.

[–] NauticalNoodle@lemmy.ml 1 points 1 day ago* (last edited 1 day ago)

Is that - is that not how I'm supposed to use birthday cards?