dxdydz@slrpnk.net 38 points 1 week ago

LLMs are trained to do one thing: produce statistically likely sequences of tokens given a context. This won't do much even to poison the well, because we already have models capable of cleaning this kind of thing up.
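
As a minimal sketch of what "statistically likely sequences of tokens" means in practice (a toy illustration of next-token sampling, not any particular model's code; the vocabulary and logits here are made up): the model assigns a score to every token in its vocabulary, those scores are turned into a probability distribution, and the next token is drawn from it.

```python
import math
import random

vocab = ["the", "cat", "sat", "mat", "on"]  # toy vocabulary
logits = [2.0, 1.0, 0.5, 0.2, 1.5]          # hypothetical model scores for some context

# softmax: convert raw scores into a probability distribution
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# sample the next token in proportion to its probability
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)  # usually "the", but any token can come up
```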

Far more damaging is the proliferation and repetition of false facts that appear on the surface to be genuine.
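
To make that mechanism concrete, here's a hedged toy example (my own construction, not from the comment): even a trivial bigram model trained by counting word pairs will start preferring a falsehood once it's repeated often enough in the training text, which is why repetition of plausible-looking false facts is the damaging part.

```python
from collections import Counter, defaultdict

corpus = (["the moon orbits earth"] * 3        # true fact, mentioned 3 times
          + ["the moon orbits jupiter"] * 10)  # repeated false fact

# count how often each word follows each other word
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

# the most likely continuation of "orbits" is now the repeated falsehood
print(counts["orbits"].most_common(1))  # [('jupiter', 10)]
```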

Consider the kinds of mistakes AI makes: it hallucinates probable-sounding nonsense. That's the kind of mistake you can lure an LLM into making more of.

Umbrias@beehaw.org 2 points 1 week ago

You can poison the well this way too, ultimately, but it's important to note: generally it is not LLMs cleaning this up, it's human workers, often in terrible conditions.
