this post was submitted on 05 Jun 2025
969 points (98.8% liked)

Not The Onion

16585 readers
1109 users here now

Welcome

We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!

The Rules

Posts must be:

  1. Links to news stories from...
  2. ...credible sources, with...
  3. ...their original headlines, that...
  4. ...would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”

Please also avoid duplicates.

Comments and post content must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, or otherwise disruptive behavior that makes this community less fun for everyone.

And that’s basically it!

founded 2 years ago
[–] SharkEatingBreakfast@sopuli.xyz 17 points 3 days ago (1 children)

An OpenAI spokesperson told WaPo that "emotional engagement with ChatGPT is rare in real-world usage."

In an age where people will anthropomorphize a toaster and form an emotional bond with it, in an age where people are feeling isolated and increasingly desperate for emotional connection, you think this is a RARE thing??

ffs

[–] T156@lemmy.world 5 points 2 days ago

iRobot, the company behind the Roomba robot vacuum, had to institute a repair policy of preserving the original machine as much as possible, because people were getting attached to their specific robot vacuum and didn't want it replaced outright, even when replacement would have been more economical.

[–] zephorah@lemm.ee 25 points 3 days ago (2 children)

This sounds like a Reddit comment.

[–] TheDeadlySquid@lemm.ee 5 points 2 days ago

And thus the flaw in AI is revealed.

[–] Godric@lemmy.world 15 points 3 days ago (1 children)

Cats can have a little salami, as a treat.

[–] Gorilladrums@lemmy.world 12 points 3 days ago (5 children)

LLM chatbots were never designed to give life advice. People have this false perception that these tools are some kind of magical crystal ball with all the right answers to everything, and they simply don't.

These models cannot think, and they cannot reason. The best they can do is give you their best prediction of what you want, based on the data they've been trained on and the parameters they've been given. You can think of their output as "targeted randomness," which is why their results sound close or convincing but are never quite right.
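To make "targeted randomness" concrete: an LLM outputs a probability distribution over possible next tokens, and a sampler draws from that distribution, so the result is biased toward likely continuations but not deterministic. Here's a minimal sketch of temperature sampling (toy logits and a hypothetical function name, not any particular model's API):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Draw a token index from model logits using temperature scaling.

    Lower temperature sharpens the distribution (nearly deterministic);
    higher temperature flattens it (more random).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]  # softmax
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy logits for a 4-token vocabulary: token 0 is most likely,
# but at moderate temperature any token can still be drawn.
logits = [2.0, 1.0, 0.5, 0.1]
token = sample_next_token(logits, temperature=0.7)
```

At very low temperature the sampler almost always picks the highest-scoring token; at higher temperatures it increasingly picks "wrong" but plausible-sounding ones, which is exactly the close-but-not-quite-right behavior described above.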

That's because these models were never designed to be used this way. They were meant as tools to aid creativity: they can help someone brainstorm project ideas, serve as entertainment, explain simple concepts, or analyze basic data, but that's about it. They should never be relied on for anything serious like medical, legal, or life advice.
