this post was submitted on 15 Dec 2025
757 points (98.6% liked)

[–] ceenote@lemmy.world 200 points 1 week ago (17 children)

So, like with Godwin's law, the probability of an LLM being poisoned as it harvests enough data to become useful approaches 1.

[–] Gullible@sh.itjust.works 109 points 1 week ago (14 children)

I mean, if they didn't piss in the pool, they'd have a lower chance of encountering piss. Godwin's law is more benign and incidental. This is someone maliciously handing out extra Hitlers in a game of Secret Hitler and then acting shocked when the game breaks down.

[–] saltesc@lemmy.world 32 points 1 week ago* (last edited 1 week ago) (9 children)

Yeah, but they don't have the money to introduce quality governance into this, so the brain trust of Reddit it is. Which explains why LLMs have gotten all weirdly socially combative too; as if two neckbeards having at it, Google skill vs. Google skill, were a rich source of A+++ knowledge and social behaviour.

[–] yes_this_time@lemmy.world 13 points 1 week ago (2 children)

If I were creating a corpus for an LLM to consume, I'd probably build some data-source quality score and drop anything that makes my model worse.
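A minimal sketch of what that could look like, assuming a made-up quality_score heuristic and an arbitrary threshold (none of this reflects any lab's actual pipeline):

```python
# Toy source-quality filter for assembling a training corpus.
# quality_score() is a stand-in heuristic; the threshold is invented.

def quality_score(doc: str) -> float:
    """Penalize very short or very repetitive documents."""
    words = doc.split()
    if len(words) < 20:
        return 0.0
    return len(set(words)) / len(words)  # 0.0 (pure repetition) .. 1.0

def filter_corpus(docs: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents scoring at or above the threshold."""
    return [d for d in docs if quality_score(d) >= threshold]

docs = [
    "spam spam spam spam spam " * 10,
    "A longer document with reasonably varied vocabulary, the kind "
    "of text you might actually want, covering several distinct "
    "topics in some depth and without obvious repetition.",
]
print(len(filter_corpus(docs)))  # -> 1: the repetitive doc is dropped
```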

[–] wizardbeard@lemmy.dbzer0.com 18 points 1 week ago (1 children)

Then you have to create a framework for evaluating whether adding each source has a "positive" or "negative" effect. Good luck with that. They can't even map input objects in the training data back to their actual sources correctly or consistently.

It's absolutely possible, but pretty much anything that adds more overhead per individual input in the training data is going to be too costly for any of them to pursue.

O(n) isn't bad, but when your n is as absurdly big as the training corpuses these things use, that has big effects. And there's no telling if it would actually only be an O(n) cost.
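To put rough numbers on that, a back-of-envelope sketch; every figure below is an assumption for illustration, not a real pipeline number:

```python
# Back-of-envelope cost of one "cheap" O(n) evaluation pass
# over a web-scale corpus. All numbers are invented for illustration.

docs = 3_000_000_000      # assume ~3 billion documents in the corpus
secs_per_doc = 1.0        # assume 1 s to judge a document's effect
machines = 1_000          # assume 1,000 workers running in parallel

wall_clock_days = docs * secs_per_doc / machines / 86_400
print(f"~{wall_clock_days:.0f} days of wall-clock time")  # ~35 days
```

And that's the optimistic case where a single pass really is enough.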

[–] yes_this_time@lemmy.world 8 points 1 week ago (1 children)

Yeah, after reading into it a bit, it seems like most of the work is up front: pre-filtering and classifying before anything hits the model, since, to your point, the actual training is expensive...

Broadly, though, I don't think it's true that they're just throwing the kitchen sink into the models without any consideration of source quality.

[–] hoppolito@mander.xyz 6 points 1 week ago (2 children)

As far as I know that's generally what's done, but it's a surprisingly hard problem to solve 'completely', for two reasons:

  1. The more obvious one - how do you define quality? With the amount of data LLMs require as input, and that then has to be checked on output, you're going to have to automate these quality checks, and one way or another that comes back to some system having to define this score and judge against it.

    There are many different benchmarks out there nowadays, but it's still virtually impossible to have just 'a' quality score for such a complex task.

  2. Perhaps the less obvious one - you generally don't want to 'overfit' your model to whatever quality-scoring system you set up. If you get too close to it, your model typically won't be generally useful anymore; it will just keep outputting things that exactly satisfy the scoring principle, nothing else (see the toy sketch after this list).

    If it reached a theoretically perfect score, it would just end up being a replication of the quality score itself.
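A contrived toy of point 2, where the 'quality score' just rewards a keyword and the 'optimizer' picks whatever maximizes it (scorer and data invented for illustration):

```python
# Toy Goodhart demo: optimizing a proxy quality score directly
# selects degenerate output that games the score rather than
# actually being good.

def proxy_quality(text: str) -> int:
    """Contrived scorer: one point per occurrence of 'helpful'."""
    return text.lower().split().count("helpful")

candidates = [
    "Here is a helpful and accurate answer to your question.",  # score 1
    "helpful helpful helpful helpful helpful helpful helpful",  # score 7
]

print(max(candidates, key=proxy_quality))  # the keyword spam "wins"
```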

[–] WhiteOakBayou@lemmy.world 14 points 1 week ago (1 children)

Like the LLM that was finding cancers: people were initially impressed, but then they figured out it had just correlated a doctor's name on the scan with a high likelihood of cancer. Once that confounding data point was removed, it no longer performed impressively. Point #2 is very Goodhart's law adjacent.

[–] bitjunkie@lemmy.world 2 points 6 days ago (1 children)

I never knew the name for this law, but it's basically how SEO ruined traditional search. I think it's also a big reason that a LOT of software engineers put way too much emphasis on passing unit tests and not nearly enough on examining what they're actually testing.
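A contrived example of the unit-test version of that trap (function and test are hypothetical):

```python
# A weak test that a wrong implementation can still pass.

def median(xs):
    return sorted(xs)[len(xs) // 2]   # wrong for even-length lists

def test_median():
    assert median([3, 1, 2]) == 2     # passes
    # No even-length case here, so median([1, 2, 3, 4]) returning 3
    # instead of 2.5 never gets caught: the test suite is the metric
    # being gamed.

test_median()
print("all tests pass")
```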

[–] phutatorius@lemmy.zip 2 points 5 days ago

It's a special case of the business-school dictum that a metric made into a performance measure immediately becomes useless, since there are now incentives to game it.

[–] yes_this_time@lemmy.world 4 points 1 week ago

Good points. What's novel information vs. wrong information? (And subtly wrong is harder to detect than very wrong.)

At some point it's hitting a user who gives feedback, but I imagine data lineage is tricky to trace by the time it gets to the end user.
