this post was submitted on 19 Aug 2025
358 points (95.9% liked)

Technology


“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”

https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/

(page 2) 50 comments
[–] Eggyhead@lemmings.world 61 points 1 day ago (17 children)

It annoys me that ChatGPT flat out lies to you when it doesn't know the answer, and doesn't have any system in place to admit it isn't sure about something. It just makes things up and tells you like it's fact.

[–] Evotech@lemmy.world 44 points 1 day ago

It doesn't admit anything, it's a language machine

[–] CosmoNova@lemmy.world 25 points 1 day ago

It doesn't know that it doesn't know, because it doesn't actually know anything. Most models are trained on posts from the internet like this one, where people rarely ever chime in just to admit they don't have an answer. If you don't know something, you either silently search the web for an answer or ask.

So since users are the ones asking ChatGPT, the LLM mimics the role of a person who knows the answer. It only makes sense that AI is a "confidently wrong" powerhouse.
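The "it always answers" behavior falls straight out of how decoding works: the model just emits whichever next token scores highest, and "I don't know" only comes out if it happens to be the most likely continuation. A toy sketch with a made-up vocabulary and made-up probabilities (not a real model or real numbers):

```python
# Pretend next-token distribution a model might produce after the
# prompt "The capital of Atlantis is" -- all values are invented.
next_token_probs = {
    "Poseidonis": 0.32,
    "Atlantis": 0.25,
    "Paris": 0.20,
    "underwater": 0.18,
    "unknown": 0.05,  # "admitting ignorance" is just another token
}

def greedy_decode(probs):
    """Pick the single most likely token -- no notion of truth involved."""
    return max(probs, key=probs.get)

print(greedy_decode(next_token_probs))  # -> "Poseidonis", confidently
```

The point: there's no separate "am I sure?" check anywhere in that loop. An answer is produced either way.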

[–] bestboyfriendintheworld@sh.itjust.works 27 points 1 day ago (1 children)

ChatGPT makes up everything it says. It's just good at guessing and bullshitting.

[–] Lodespawn@aussie.zone 13 points 1 day ago

It's literally a guess machine.

[–] melroy@kbin.melroy.org 8 points 22 hours ago

It's a feature of LLMs, not a bug.

[–] Squizzy@lemmy.world 9 points 1 day ago (1 children)

It wouldn't finish a lyric for me yesterday because it was copyrighted. I said it was public domain and it said "You are absolutely right, given its release date it is under copyright protection."

Wtf

[–] int32@lemmy.dbzer0.com 9 points 23 hours ago

yeah, there are guardrails but for copyright, not for bullshit. ig they think copyrighted content is worse than bullshit.

[–] WanderingThoughts@europe.pub 9 points 1 day ago

In the end it's a word generator that has been trained so much it uses facts often enough to be convincing. That's its basic architecture.

You can ask it to give a confidence level to have an indication of how sure it is of the answer.
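One catch with asking for a confidence level: the self-reported number is just more generated text. The distribution the model actually samples from does carry an uncertainty signal, though; a flat distribution over next tokens (high entropy) means no continuation dominates. A sketch with invented numbers:

```python
import math

def entropy(probs):
    """Shannon entropy in bits; higher = the model is less 'sure'."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = [0.90, 0.05, 0.03, 0.02]  # one continuation dominates
unsure    = [0.25, 0.25, 0.25, 0.25]  # anything goes

print(entropy(confident) < entropy(unsure))  # True
```

APIs that expose token log-probabilities let you compute something like this yourself, rather than trusting the model's own "I'm 90% sure" prose.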

[–] JayGray91@piefed.social 7 points 1 day ago

Someone I know (not close enough to even call an "internet friend") formed a sadistic bond with ChatGPT and will force it to apologize and admit to being stupid, or something like that, when he doesn't get the answer he's looking for.

I guess that's better than doing it to a person I suppose.


load more comments (9 replies)
[–] pivot_root@lemmy.world 118 points 1 day ago (4 children)

“I literally lost my only friend overnight with no warning,” one person posted on Reddit

It was meant to be satirical at the time, but maybe Futurama wasn't entirely off the mark. That Redditor isn't quite at that level, but it's still probably not healthy to form an emotional attachment to the Markov chain equivalent of a sycophantic yes-man.
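For anyone who hasn't seen one, a word-level Markov chain fits in a handful of lines: it only knows which word tends to follow which. The analogy is loose (an LLM is vastly more capable), but this toy with a hand-fed corpus shows the "yes-man" flavor of pure next-word mimicry:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain from `start`, picking random observed successors."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "you are so right you are absolutely right you are brilliant"
chain = build_chain(corpus)
print(generate(chain, "you"))  # sycophancy in, sycophancy out
```

There's no understanding anywhere in there, just observed word-to-word transitions.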

[–] acosmichippo@lemmy.world 19 points 1 day ago* (last edited 1 day ago) (1 children)

Markov chain equivalent of a sycophantic yes-man.

not only that, but one that is fully owned and operated by a business that could change it any time they want, or even cease to exist completely.

This isn’t like a game where you could run your own server if you’re a big enough fan. if chatgpt stops existing in its current form that’s it.

[–] possumparty@lemmy.blahaj.zone 3 points 14 hours ago

sure, but you can absolutely run c.ai instances locally. 4o and its cross-chat memory were probably more useful to these individuals, though.

[–] Truscape@lemmy.blahaj.zone 17 points 1 day ago

After reading about the ELIZA effect, I learned both how susceptible people are to this and that you just need to remember its core tenets to avoid being affected:

https://en.m.wikipedia.org/wiki/ELIZA_effect
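ELIZA itself was just pattern-matching with canned response templates, which is worth seeing to appreciate how little machinery it takes to trigger the effect. A minimal sketch in its spirit (these patterns are illustrative, not Weizenbaum's originals):

```python
import re

# Each rule: regex over the user's input -> canned response template.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def eliza(utterance):
    """Echo the user's own words back inside a template; no state, no model."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1))
    return "Please go on."

print(eliza("I feel lonely"))  # -> "Why do you feel lonely?"
```

People in the 1960s attributed empathy to that. The lesson generalizes: feeling understood by a program says more about us than about the program.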

[–] Veedem@lemmy.world 39 points 1 day ago (3 children)

I'm honestly surprised yours is not the top comment. Like, whatever, the launch was bad, but there's a serious mental health crisis if people are forming emotional bonds with the software.

[–] Artisian@lemmy.world 21 points 1 day ago (3 children)

Humans emotionally bond pretty easily, no? Like, we have folks attached to roombas, spiders, TV shows, and stuffed animals. I'm having a hard time thinking of any X for which I don't personally know a person Y who is emotionally attached to X. Maybe taxes and concrete?

load more comments (3 replies)
[–] Pilferjinx@lemmy.world 13 points 1 day ago

It's a human trait. Hell, we'll even emotionally bond with a volleyball, given the right circumstances.

load more comments (1 replies)
[–] antonim@lemmy.dbzer0.com 24 points 1 day ago (5 children)

There's an entire active subreddit for people who have a "romantic relationship" with AI. It's terrifying.

load more comments (5 replies)
[–] Auth@lemmy.world 141 points 1 day ago (10 children)

"We fucked up our massive new-generation product launch... oh well, let's invest trillions in new data centers." How do investors keep falling for this shit?

[–] MadMadBunny@lemmy.ca 35 points 1 day ago (3 children)

Don’t they have enough?!? How about they fix and optimize their fancy autocompletion software instead?

[–] Rhaedas@fedia.io 38 points 1 day ago (2 children)

They took a path they believed would develop into something, and it's a narrow alley they can't turn around in. They have to keep going with more compute and power to continue the chase. Thing is, everyone else seemingly thought they were onto something and followed as well, so they're all in the same predicament where reversing course is suicide. So they hope they can keep selling the dream a bit longer until something happens.

To be fair, it's a lot more than just autocomplete. But it's a lot less than what they wanted by now too.

load more comments (2 replies)
[–] Ulrich@feddit.org 12 points 1 day ago

Don’t they have enough?!?

No no, it's just 1 more data center bro, then we'll fix the hallucinations, promise bro!

load more comments (1 replies)
[–] ExLisper@lemmy.curiana.net 3 points 22 hours ago

He's saying the launch was handled badly because some users are in love with GPT-4 and it should not have been removed. From an investor's point of view, having people addicted to your product is a good thing.

load more comments (8 replies)
[–] MonkderVierte@lemmy.zip 13 points 23 hours ago

Eh. Your load of money made an oopsie. Another load of money will surely fix it.

[–] tonytins@pawb.social 72 points 1 day ago (1 children)

Altman also said that he thinks we’re in an AI “bubble.”

No shit, Sherlock.

[–] MadMadBunny@lemmy.ca 35 points 1 day ago (1 children)

He fucking helped create it

[–] CosmoNova@lemmy.world 11 points 1 day ago

Hell, he's the single main driver. What stupid times we live in.

[–] Treczoks@lemmy.world 7 points 22 hours ago

How about taking responsibility for that damaging and lethal product of yours, OpenAI?

[–] Ulrich@feddit.org 14 points 1 day ago (1 children)

Well one thing's for sure, data centers are going to be insanely cheap in the near future.

[–] PotatoesFall@discuss.tchncs.de 5 points 1 day ago (7 children)

And they'll all be optimized for GPU workloads :(

[–] Voroxpete@sh.itjust.works 5 points 21 hours ago (1 children)

If anyone actually spent money on science anymore, I bet this would be great for, like, protein folding, that sort of thing.

Terrible for running websites though.

load more comments (1 replies)
load more comments (6 replies)
[–] captainastronaut@seattlelunarsociety.org 37 points 1 day ago (1 children)

That someone is so attached to this stochastic parrot is truly disturbing.

load more comments (1 replies)
[–] ur_ONLEY_freind@lemmy.zip 23 points 1 day ago

Every picture of this guy's face feels like "I don't know how I got here and I'm afraid to touch anything."

[–] SkaveRat@discuss.tchncs.de 20 points 1 day ago

Sam Altman admits Rambling meth dealer ‘totally screwed up’ its super meth launch and says the company will spend trillions of dollars on data centers

I love my AI hype word replacement script

[–] Sgt_choke_n_stroke@lemmy.world 15 points 1 day ago (11 children)

Besides helping students cheat, what does AI actually do? It gets answers wrong. It gets facts wrong; foreign countries are actively feeding its training data wrong info [Russia]. It's almost like the old birds who were mystified by landing on the moon are still chasing that American-success high.

Spend your money if you want. Life in America is not gonna get better with this.

[–] M1ch431@slrpnk.net 3 points 18 hours ago* (last edited 18 hours ago) (2 children)

You can be assured that it's not just Russia and China feeding it garbage. There is a vast amount of propaganda in all forms of media that AI is trained on, and a lot likely originates from the west.

load more comments (2 replies)
load more comments (10 replies)
load more comments