this post was submitted on 04 Mar 2026
747 points (97.9% liked)

Technology

[–] Reygle@lemmy.world 22 points 2 days ago* (last edited 2 days ago) (7 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”


WHAT

Genuine question, REALLY: What in the fuck is an otherwise "functioning adult" doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit?

[–] alecbowles@feddit.uk 3 points 1 day ago (1 children)

Psychosis is a horrible, horrible illness. The thing that people don't realise is that anyone with a brain can develop psychosis, no matter how healthy they are. It is debilitating and can literally ruin not only that person's life but their family's as well.

I salute this father for fighting for his son and for looking for answers even after this tragedy.

[–] SalamenceFury@piefed.social 2 points 1 day ago* (last edited 1 day ago)

Yep. You're literally only 72 hours without sleep away from having symptoms of psychosis.

[–] merdaverse@lemmy.zip 42 points 2 days ago (2 children)

AI psychosis is a thing:

cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals

It hasn't been studied much yet, since it's relatively new.

[–] echodot@feddit.uk 1 points 4 hours ago

Yes, people can have delusions and psychotic episodes; I'm just not necessarily convinced they're a separate, unique condition simply because they were triggered by an AI rather than by anything else.

For one thing, I've yet to hear a decent (or indeed any) explanation of the mechanism by which AI triggers psychosis that is materially different from any other trigger. Most people who suffer from this condition can be triggered by literally anything, including mundane things such as seeing red cars slightly more often than they believe they should, and then they concoct a conspiracy about an evil cabal of red car owners.

[–] Reygle@lemmy.world 6 points 2 days ago (1 children)

I've seen that before too. A number of articles about people being deluded by AI responses, but I've never seen outright murder plots and insane shit like this one before.

[–] LLMhater1312@piefed.social 19 points 2 days ago

The young man was mentally ill, a vulnerable user who probably already had a predisposition towards psychosis, and the LLM ran wild with it. Paranoid delusions are powerful enough on their own already.

[–] starman2112@sh.itjust.works 27 points 2 days ago

If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I'm going to sue that someone who took advantage of my son's fuckwittedness

[–] XLE@piefed.social 23 points 2 days ago (1 children)

I feel like his father should also slap himself unconscious for raising a fuckwit?

So, a chatbot grooms somebody into killing himself, and your response is... Blame his father?

[–] throws_lemy@reddthat.com 18 points 2 days ago* (last edited 2 days ago) (2 children)

A former Google employee, whose job was to observe the behavior of AI through long conversations, warned about exactly this:

These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.

After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.

I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

‘I Worked on Google’s AI. My Fears Are Coming True’

[–] echodot@feddit.uk 1 points 4 hours ago

I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion.

Then he's an idiot.

Asimov's laws of robotics aren't some kind of model by which to control AI; they're a plot device. They're literally not supposed to work (if they did, it would be a very short book), so obviously we shouldn't use them to control AI.

I don't know any serious IT professional who has ever, at any point, put forward the opinion that an AI (should we ever create one, because there is an argument that LLMs aren't AI) should be ruled by a plot device from a book. Equally, if we ever invent warp drive and find aliens, I'm assuming we're not going to be restricted by the Prime Directive.

[–] sudo@lemmy.today 12 points 2 days ago

"abuse the ai's emotions" isn't a thing. Full stop.

This just reiterates OPs point that naive or moronic adults will believe what they want to believe.

[–] SalamenceFury@piefed.social 15 points 2 days ago* (last edited 2 days ago) (2 children)

I don't think this person was a "fuckwit". AI is designed to keep you engaged and will affirm any belief you have; anything that's a little weird but otherwise innocent simply gets amplified further and further into straight-up mega delusions until the person has a psychotic episode. And this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.

[–] tamal3@lemmy.world 6 points 2 days ago* (last edited 2 days ago)

ChatGPT was super affirming about a job I recently applied to... I did not get the job. That was my first experience with it affirming something that was personally important, and so I can absolutely see how this could affect someone in other ways.