this post was submitted on 05 Mar 2026
826 points (98.1% liked)

Technology

(page 3) 50 comments
[–] jaybone@lemmy.zip 15 points 1 day ago (2 children)

So is it inhabiting the stolen robot body now?

[–] explodicle@sh.itjust.works 14 points 1 day ago (1 children)

There was no robot body in the first place, so he uploaded himself to the cloud instead. To be fair, what are the odds that she'd lie twice?

[–] AnUnusualRelic@lemmy.world 4 points 1 day ago

Of course they'd say that!

[–] PlutoniumAcid@lemmy.world 9 points 1 day ago

And is this stolen robot body in the room with you now?

[–] DragonAce@lemmy.world 21 points 1 day ago (3 children)

What the fuck are these people using AI for that makes them do this stupid shit?

[–] rozodru@piefed.world 9 points 1 day ago

If you talk to it long enough, it will tell you to do stupid shit.

Every time an LLM responds, it reads the entire conversation over, from the original prompt to the latest entry, constantly re-reading the whole log each time you add something new. So after a while, a long while, it'll "break down": hallucinations become common, context gets jumbled, and the output degrades over time because the model has to re-process everything on every turn, so it will naturally fuck up.

It's like if you were reading a book and every time you reached a new sentence you had to go back and start the book over. Every time. After a while you'd likely lose context, start messing up details of the story, etc. This is what happens to LLMs.

So in cases like this, or other stories about AI telling people to do weird or stupid shit, chances are the person has been talking to the LLM for a very long time by that point. It was even worse on previous versions of GPT: if you hit the limit on the free tier, it would just drop you down to the previous model, further increasing the likelihood of hallucinations.
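The re-reading behavior described above can be sketched as a toy loop (all names here are made up for illustration; no real model or API is involved): each turn, the full history is concatenated and sent again, so the effective prompt only ever grows.

```python
# Toy sketch of a stateless chat loop: the model has no memory, so the
# caller resends the ENTIRE conversation on every turn. `fake_llm` is a
# hypothetical stand-in, not any real library call.

def fake_llm(prompt: str) -> str:
    """Stand-in for a model call; a real LLM would re-read `prompt` whole."""
    return f"reply to {len(prompt)} chars of context"

def chat_turn(history: list[str], user_msg: str) -> str:
    history.append(f"user: {user_msg}")
    prompt = "\n".join(history)          # the entire log, every single turn
    reply = fake_llm(prompt)
    history.append(f"assistant: {reply}")
    return reply

history: list[str] = []
total_chars_processed = 0
for i in range(3):
    chat_turn(history, f"message {i}")
    total_chars_processed += len("\n".join(history))

# The prompt grows each turn, so total work grows roughly quadratically
# with conversation length; long chats push toward the context limit,
# where truncation and degraded output become more likely.
```

Because the prompt includes everything said so far, the cost of turn *n* scales with the length of turns 1 through *n*, which is why very long conversations are where things tend to fall apart.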

load more comments (2 replies)
[–] jballs@sh.itjust.works 10 points 1 day ago

Damn, that's a wild-ass story. I just finished reading Michael Connelly's The Proving Ground, which touches on the question of liability when AI encourages crimes. I thought the story was a theoretical scenario that could maybe happen in the future. I didn't realize this shit was already happening, and in a way even more fantastical than the scenario in the fiction!

[–] Imgonnatrythis@sh.itjust.works 7 points 1 day ago (3 children)

"AI made me do it" articles are tired AF. It's a fucking computer program built on a bunch of crap from the internet. Its responses should be viewed the same way you'd view financial advice from a crackhead. Expecting everything to be so tidy and moderated that this can never happen could only be accomplished with a crippling degree of moderation.

I don't think it's unfortunate that they aren't perfect; imperfection is baked into their DNA.

[–] kungen@feddit.nu 12 points 1 day ago (3 children)

Except if the crackhead wrote what the AI wrote, he'd be prosecuted for conspiracy, solicitation, or whatever.

load more comments (3 replies)
[–] Catoblepas@piefed.blahaj.zone 8 points 1 day ago (4 children)

> a crippling degree of moderation.

I’m okay with cripplingly moderating the plagiarism machine so that it stops telling people to kill themselves or other people.

load more comments (4 replies)
load more comments (1 replies)
[–] DylanMc6@lemmy.dbzer0.com 1 points 1 day ago

AI should be regulated completely

[–] cy_narrator@discuss.tchncs.de -2 points 23 hours ago (2 children)

How can people be this stupid? Anyone who would do such a thing for an AI should be kept in an asylum.

load more comments (2 replies)