this post was submitted on 04 Mar 2026
747 points (97.9% liked)

(page 2) 50 comments
[–] ordnance_qf_17_pounder@reddthat.com 37 points 2 days ago (5 children)

Believing what AI chatbots tell you is the new version of believing that dozens of beautiful women who live nearby want to date you/sleep with you.

[–] XLE@piefed.social 30 points 2 days ago

Except in this case, Google is one of the companies promoting these chatbots to its users and telling them to trust them. It runs TV ads telling people to talk to them. Today's scammers are the stock market's Magnificent Seven.

[–] Reygle@lemmy.world 22 points 2 days ago* (last edited 2 days ago) (16 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”


WHAT

Genuine question, REALLY: What in the fuck is an otherwise "functioning adult" doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit.

[–] throws_lemy@reddthat.com 18 points 2 days ago* (last edited 2 days ago) (2 children)

A former Google employee, whose job was to observe the AI's behavior through long conversations, warned about exactly this:

These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.

After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.

I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

‘I Worked on Google’s AI. My Fears Are Coming True’

[–] sudo@lemmy.today 12 points 2 days ago

"abuse the ai's emotions" isn't a thing. Full stop.

This just reiterates OP's point that naive or moronic adults will believe what they want to believe.

[–] SalamenceFury@piefed.social 15 points 2 days ago* (last edited 2 days ago) (4 children)

I don't think this person was a "fuckwit". AI is designed to keep you engaged and will affirm any belief you have. Anything that's a little weird but otherwise innocent simply gets amplified further and further into straight-up mega delusions until the person has a psychotic episode. And this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.

[–] Stonewyvvern@lemmy.world 21 points 2 days ago (6 children)

Reality is really difficult for some people...

[–] Akuchimoya@startrek.website 14 points 2 days ago (1 children)

Truly, I don't understand why, but there are fully grown adults who believe that anything an LLM says is true. Maybe they think computers are unbiased (which is only as true as the programmers and data are unbiased); maybe it's the confidence with which LLMs deliver information; maybe they believe the program actually searches for and verifies information; maybe it's all of the above and more.

I know a guy who routinely says, "I asked ChatGPT...", and even after I've explained how LLMs are complex word predictors and are not programmed for factual truth, he still goes to ChatGPT for everything. It's a total refusal to believe otherwise, but I can't fathom why.
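To make "word predictor" concrete, here's a minimal Python sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint (both just illustrative stand-ins for whatever model you like). All the model computes is a probability for each candidate next token; nothing in that step checks whether the likely continuation is true.

```python
# Minimal sketch: an LLM scores every candidate next token, and nothing more.
# Assumes: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits for the token that would come next: one score per vocabulary entry
    next_token_logits = model(**inputs).logits[0, -1]

probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)
for p, tok in zip(top.values, top.indices):
    # Candidates reflect statistical likelihood in the training text,
    # not any fact-checking step; there is none.
    print(f"{tokenizer.decode(int(tok))!r}: {float(p):.3f}")
```

Generation is just that step in a loop: sample a token, append it, score again.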

[–] maplesaga@lemmy.world 9 points 2 days ago

There's a EULA for that.

[–] IchNichtenLichten@lemmy.wtf 23 points 2 days ago (2 children)

In a sane universe people would be on trial for unleashing this shit on society.

[–] man_wtfhappenedtoyou@lemmy.world 12 points 2 days ago (5 children)

How do you even get these chat bots to start telling you shit like this? Is it just from having a conversation for too long in the same chat window or something? I don't understand how this keeps happening.

[–] panda_abyss@lemmy.ca 18 points 2 days ago

This technology was not ready for release, yet they released it anyway.

They do deserve to be sued; this was negligence.
