Mind?
In the same sense I'd describe Othello-GPT's internal world model of the board as 'board', yes.
Also, "top of mind" is a common idiom and I guess I didn't feel the need to be overly pedantic about it, especially given the last year and a half of research around model capabilities for introspection of control vectors, coherence in self modeling, etc.
Yes. The person(s) who set the LLM/AI up.
How are we meant to have these conversations if people keep complaining about the personification of LLMs without offering alternative phrasing? Showing up and complaining without offering a solution is just that: complaining. Do something about it. What do YOU think we should call the active context a model has access to, without personifying it or over-technicalizing the phrasing and rendering it useless to laymen, @neclimdul@lemmy.world?
Well, since you asked, I'd basically do what you said. Something like "so 'humans might hate hearing from me' probably wasn't part of the context it was using."
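To make that concrete, here's a minimal sketch of what "the context it was using" actually is (assuming the OpenAI Python client as a stand-in; the model name, roles, and messages are made up for illustration). The model conditions on its frozen training weights plus exactly the messages handed to it, and nothing else:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The entire "context it was using" is this list plus the system's
# frozen weights. If "humans might hate hearing from me" isn't in the
# training patterns or in these messages, it's simply unavailable.
messages = [
    {"role": "system", "content": "You are a release-notes bot."},
    {"role": "user", "content": "Summarize this week's changes."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=messages,
)
print(response.choices[0].message.content)
```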
Let's be generous for a moment and assume good intent: how else would you describe the situation where the LLM doesn't consider a negative response to its actions, due to its training and context being limited?
Sure, it gives the LLM a more human-like persona, but so far I've yet to read a better way of describing its behavior; it is designed to emulate human behavior, so using human descriptors helps convey the intent.
I think you did a fine job right there explaining it without personifying it. You also captured the nuance without implying the machine could apply empathy or reasoning, or be held accountable the same way a human could.
There's value in brevity and clarity: I took two paragraphs, and the other phrasing was two words. I don't like it either, but it does seem to be the way most people talk.
I assumed you would understand I meant the short part of your statement describing the LLM, not your slight dig at me, your setting up of the question, or your clarification of your perspective.
To be more clear, I meant "the LLM doesn't consider a negative response to its actions due to its training and context being limited."
In fact, what you said is not much different from the statement in question. And you could argue that, on top of being more brief, removing "top of mind" actually makes it clearer: it implies training and prompt context, instead of the bot understanding and being mindful of the context it was operating in.
Assuming any sort of intent at all is the mistake.