this post was submitted on 08 Aug 2025
771 points (96.6% liked)

Or my favorite quote from the article

"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.

[–] Mediocre_Bard@lemmy.world 24 points 2 days ago (4 children)

Did we create a mental health problem in an AI? That doesn't seem good.

[–] Agent641@lemmy.world 9 points 2 days ago (1 children)

One day, an AI is going to delete itself, and we'll blame ourselves because all the warning signs were there.

[–] Aggravationstation@feddit.uk 10 points 1 day ago

Isn't there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment, and that shutting down would be the best action it could take for humanity?

[–] buttnugget@lemmy.world 4 points 2 days ago (1 children)

Why are you talking about it like it’s a person?

[–] Mediocre_Bard@lemmy.world 6 points 1 day ago (1 children)

Because humans anthropomorphize anything and everything. Talking about a thing that talks like a person as though it is a person seems pretty straightforward.

[–] buttnugget@lemmy.world 3 points 1 day ago (1 children)

It’s a computer program. It cannot have a mental health problem. That’s why it doesn’t make sense. Seems pretty straightforward.

[–] Mediocre_Bard@lemmy.world 1 points 18 hours ago

Yup. But people will still project one onto it, because that's how humans work.

[–] ICastFist@programming.dev 7 points 2 days ago

Considering it was fed millions of coders' messages from the internet, it's no surprise it "realized" its own stupidity.

[–] Azal@pawb.social 4 points 2 days ago (1 children)

Dunno, maybe AI with mental health problems might understand the rest of humanity and empathize with us and/or put us all out of our misery.

[–] Mediocre_Bard@lemmy.world 1 points 1 day ago

Let's hope. Though, adding suicidal depression to hallucinations has, historically, not gone great.