this post was submitted on 04 Mar 2026
747 points (97.9% liked)

Technology

[–] mdhughes@lemmy.sdf.org 0 points 11 hours ago (1 children)

He wasn't a fuckwit, he wasn't undisciplined, he wasn't badly parented. This is what happens when a normal human is exposed to too much chatbot. This can and will happen to you; your "mental defenses" are not sufficient.

If we don't destroy it first, it will destroy us. #butlerianJihad

[–] echodot@feddit.uk 1 points 2 hours ago* (last edited 2 hours ago)

A little bit alarmist, I feel. After all, if it were this easy to be affected by AI, about half the population would be dead by now, so clearly it's not that simple.

[–] Ilandar@lemmy.today 15 points 1 day ago (1 children)

I don't understand why so many people default to "wouldn't happen to me, that person was just stupid" every time this happens. Did you guys not read the bit where he was being encouraged to commit violence in public by the chatbot? If it's getting to that point then there is clearly a massive fucking problem that needs urgent addressing, regardless of the intelligence of the user.

[–] notacat@infosec.pub 10 points 1 day ago (1 children)

I think it’s similar to cults or abusive relationships. It’s not a matter of intellect, it’s how vulnerable a person is when they encounter this thing that they think could help them.

[–] Ilandar@lemmy.today 5 points 1 day ago

I agree. The connection between all of these things is that they involve relationships. Humans are social animals that can suffer from loneliness and AI companies are exploiting this in a similar way. Loneliness is a common thread throughout all of these AI psychosis suicide cases.

[–] eestileib@lemmy.blahaj.zone 38 points 1 day ago

I mentioned this story to my friend: "it only took six weeks of using Gemini to decide to kill himself wtf"

He immediately replied "I have to use Gemini at work and I get where he was coming from"

[–] BranBucket@lemmy.world 155 points 2 days ago* (last edited 2 days ago) (37 children)

People don't often realize how subtle changes in language can change our thought process. It's just how human brains work sometimes.

The old bit about smoking and praying is a great example. If you ask a priest if it's alright to smoke when you pray, they're likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it's alright to pray while you're smoking, they'd probably say yes, as you should feel free to pray to God whenever you need...

Now, make a machine that's designed to be agreeable, relatable, and persuasive, but that can't separate fact from fiction, can't reason, has no way of intuiting its user's mental state beyond checking for certain language parameters, and can't know whether the user is actually following its suggestions with physical actions or is just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible...

You get one answer that leads you a set direction, then another, then another... It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn't a steady downhill slope, it rolls up and down from reality to delusion a few times before going down sharply.

Are we surprised some people's thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected and to what degree.

[–] HeyThisIsntTheYMCA@lemmy.world 42 points 2 days ago (1 children)

People don’t often realize how subtle changes in language can change our thought process.

Just changing a single word in your daily usage can change your entire outlook from negative to positive. It's strange, but unless you've experienced for yourself how such minute changes can have such large effects, it's hard to believe.

[–] BranBucket@lemmy.world 11 points 2 days ago (3 children)

And this is hard for me, actually. Because of my work background and the jargon used, I'm unconsciously negative about things a lot of the time. It's a tough habit to break.

[–] Cyv_@lemmy.blahaj.zone 206 points 2 days ago* (last edited 2 days ago) (9 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.

“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

Well, that's pretty fucked up... Sometimes I see these and I think, "well even a human might fail and say something unhelpful to somebody in crisis" but this is just complete and total feeding into delusions.

[–] XLE@piefed.social 136 points 2 days ago (1 children)

It's hard reading this while remembering that your electricity bills are increasing so that Google's data centers can provide these messages to people.

[–] VieuxQueb@lemmy.ca 11 points 1 day ago

And you won't be able to afford a computer or power it anyways.

[–] wonderingwanderer@sopuli.xyz 34 points 2 days ago* (last edited 2 days ago) (7 children)

That's fucking crazy. Did he ask it to be GM in a roleplaying choose-your-own-adventure game that got out of hand, and while they both gradually forgot that it was a game the lines between fantasy and reality became blurred by the day? Or did it just come up with this stuff out of nowhere?

[–] SalamenceFury@piefed.social 62 points 2 days ago* (last edited 2 days ago) (3 children)

In every other case of AI bots doing this, the bot will always affirm whatever the person says to it. So if they say something a little weird, the AI will confirm it and feed it further. This happens every time. The bots are pretty much designed to keep talking to the person, so they're essentially sycophantic by design.

[–] 8uurg@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

Most LLM chatbots don't push back when they should. In situations like these, at this scale, even a 5 percent failure rate would be abysmal, let alone 55 percent.

[–] teft@piefed.social 118 points 2 days ago (6 children)

“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.

Just remember that these language models are also advising governments and military units.

Unrelated: I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.

[–] XLE@piefed.social 47 points 2 days ago

AI tools are both sycophantic and helpful for laundering bad opinions. Who needs experts when Anthropic's Claude will tell you what you want to hear?

Anthropic’s AI tool Claude central to U.S. campaign in Iran - used alongside Palantir surveillance tech.

[–] SalamenceFury@piefed.social 58 points 2 days ago (2 children)

As a neurodivergent person, I've noticed that the people who usually fall into AI psychosis are normies with no history of mental illness. They never learned the safeguards that people who ARE vulnerable to a mental breakdown put on themselves, and they can't spot the red flags that usually spiral into a psychotic episode, which is why it's so insanely easy for regular people to fall into the traps of chatbots. Most neurodivergent people I know/follow on other socials instantly saw the sycophant trap these bots are and warned everyone. Normies never had that luxury, or told us we were overreacting. Yeah, we sure were...

[–] Grimy@lemmy.world 58 points 2 days ago* (last edited 2 days ago) (1 children)

“On September 29, 2025, it sent him ... the chatbot pretended to check it against a live database.

I usually don't give much credence to these stories, but this is actually nuts. If this happened without Google even aiming for it, imagine how easy it would be for them to knowingly build sleeper cells and activate them all at once.

Edit: removed the quote since another user posted it at the same time and it's a bit of a wall of text to have twice.
