this post was submitted on 11 Mar 2026
49 points (94.5% liked)


Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms—consuming 172 billion tokens across more than 4,000 runs—we find that the answer is “substantially, and unavoidably.” Even under optimal conditions—best model, best temperature, temperature chosen specifically to minimize fabrication—the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.

top 22 comments
[–] unpossum@sh.itjust.works 3 points 3 hours ago

GLM 4.5 is from August. Isn’t the real tl;dr that a seven-month-old open model, which was behind proprietary models at the time, did better than most humans would?

[–] RandAlThor@lemmy.ca 8 points 4 hours ago (2 children)

This is pretty bonkers. How TF are they fabricating answers?????

[–] bad1080@piefed.social 8 points 3 hours ago (1 children)
[–] snooggums@piefed.world 4 points 3 hours ago (2 children)

Aka being wrong, but with a fancy name!

When Cletus is wrong because he mixed up a dog and a cat when describing their behavior, do we call it hallucinating? No.

[–] Scipitie@lemmy.dbzer0.com 13 points 3 hours ago (1 children)

Accepting concepts like "right" and "wrong" gives those tools way too much credit, and basically follows the AI narrative of the corporations behind them. Those terms can only be applied to the output, not to the tool itself.

To be precise:

LLMs can't be right or wrong because the way they work has no link to any reality - it's stochastics, not evaluation. I also don't like the term hallucination for the same reason. It's simply a temperature setting that is too high jumping to a nearby but unrelated vector set.

Why this is an important distinction: arguing that an LLM is wrong is arguing on the home ground of ChatGPT and the like. The response is then "oh, but we'll make them better!" and their marketing departments are overjoyed.

To take your calculator analogy: just as those tools have floating point errors that are inherent to them, wrong outputs are a core part of LLMs.

We can minimize that, but then they automatically lose part of their function. This limitation is far stronger for LLMs than limiting a calculator to 16 digits after the decimal point, though...

[–] CubitOom@infosec.pub 4 points 2 hours ago* (last edited 2 hours ago) (1 children)

What word would you propose to use instead?

Fabrication?

[–] Scipitie@lemmy.dbzer0.com 3 points 2 hours ago (1 children)

That's my problem: any single word humanizes the tool, in my opinion. Perhaps something like "stochastic debris" comes close, but there's no chance to counter the combined force of pop culture, corp speak, and humanity's talent for seeing humanoid behavior everywhere but in each other. :(

[–] Telorand@reddthat.com 3 points 2 hours ago (1 children)

We do enjoy pareidolia, don't we?

[–] deranger@sh.itjust.works 1 points 49 minutes ago

Pareidolia just means seeing patterns that aren’t there; it’s not implicitly human. If you see a dog in the clouds, that’s pareidolia.

[–] bad1080@piefed.social 4 points 3 hours ago

if you have a lobby you get special names - look at the pharma industry, which coined the term "discontinuation syndrome" for a simple "withdrawal"

[–] ji59@hilariouschaos.com 1 points 3 hours ago* (last edited 3 hours ago)

Because guessing a correct answer is more successful than saying nothing.

[–] CubitOom@infosec.pub 1 points 2 hours ago (2 children)

I'm not good at math, so someone please help me.

If a model hallucinates 1% of the time for every question in a chat window that has 100 prompts in it, what is the chance of receiving a hallucination at some point in the chat?

[–] hersh@literature.cafe 3 points 1 hour ago* (last edited 1 hour ago) (1 children)

If I understand you correctly: 63.4% odds of having at least one hallucination.

The simple way to calculate the odds of getting at least one error is to calculate the odds of having ZERO, and then inverting that.

If the odds of a single instance being an error are 1%, that means you have a 99% chance of having no errors. If you repeat that 100 times, then it's 99% of 99% of 99%...etc. In other words, 0.99^100 ≈ 0.366. That's the odds of getting zero errors 100 times in a row. The inverse of that is 0.634, or 63.4%.

This is the same way to calculate the odds of N coin flips all coming up heads. It's going to be 0.5^N. So the odds of getting 10 heads in a row is 0.5^10 ≈ 0.0977%, or 1:1024.

Edit: This is assuming independence of all 100 prompts, which is not generally true in a single chat window, where each prompt follows the last and retains both the previous prompts and answers in its context. As the paper explains, error rate tends to increase with context length. You should generally start a new chat rather than continue in an existing one if the previous context is not highly relevant.
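
For anyone who wants to check the arithmetic, here's a minimal sketch in Python (assuming the per-prompt rates are independent, which, as the edit above notes, isn't strictly true within one chat window):

```python
# Probability of at least one fabrication in n independent prompts,
# given a per-prompt fabrication rate p: 1 - (1 - p)^n
def at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(at_least_one(0.01, 100))    # ~0.634 -> 63.4% chance of at least one error
print(at_least_one(0.0119, 100))  # ~0.698 using GLM 4.5's 1.19% rate at 32K
```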

[–] CubitOom@infosec.pub 2 points 1 hour ago

Thanks, I also wonder how context collapse affects the fabrication rate.

[–] Telorand@reddthat.com 1 points 2 hours ago

One in 100. However, that is simply a measure of probability, so do not expect it to always hold true for every 100 prompts.

For example, if you rolled a 100-sided die 100 times, it's possible to get a one every time. In practice, it would likely be a mix. You might have a session where you get no wrong answers and times when you get several.

The problem is that ignorant people trust these models implicitly, because they sound convincing and authoritative, and many people are not equipped to be able to vet the information being generated (also notice I didn't say "retrieved").

[–] FauxLiving@lemmy.world 1 points 3 hours ago (2 children)

At 32K, the best model (GLM 4.5) fabricates 1.19% of answers

Not bad, I don't know many people who are 98.81% accurate in their statements.

[–] Iconoclast@feddit.uk 2 points 1 hour ago* (last edited 1 hour ago)

It's a pleasure to meet you! The only thing exceeding my level of wisdom is my modesty.

[–] snooggums@piefed.world 4 points 3 hours ago* (last edited 3 hours ago) (1 children)

Calculators are correct 100% of the time.

[–] FauxLiving@lemmy.world 3 points 2 hours ago (1 children)

Calculators are not people, Mr. <1.19%.

[–] snooggums@piefed.world 2 points 2 hours ago* (last edited 1 hour ago) (2 children)

That's right! We should be comparing computers to computers. Well, hardware computers, not people computers.

[–] FauxLiving@lemmy.world 2 points 1 hour ago

Calculators are not computers. Computers contain calculator-like elements, but a calculator is no more a computer than a passenger jet is a coffee shop by virtue of having a coffee pot onboard.

Calculators cannot fabricate answers, but nor are they 100% correct, due to things like bit flips and square root approximations. They also cannot write text, so the comparison makes even less sense.

LLMs and humans can both fabricate answers in written text, so comparing the written-text fabrication rate of an LLM to that of a human (both entities generate their answers with neural networks) makes more sense than comparing either to a calculator, which neither uses a neural network nor produces text.

So 'we' should compare like things and not choose items based on superficial similarities.

[–] ji59@hilariouschaos.com 0 points 1 hour ago

What do you even mean? Calculators and LLMs solve different problems, and there are a lot of calculators and a lot of LLMs. Also, calculator accuracy approaches 0% over all possible inputs, because they all have limited precision and there are infinitely many numbers. Some calculators can't even correctly answer 0.1+0.2, while most LLMs can.
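
A quick illustration of the 0.1 + 0.2 point, as a minimal Python sketch (this is a property of IEEE 754 binary floating point in general, not of any particular calculator):

```python
# 0.1 and 0.2 have no exact binary floating-point representation,
# so their sum is not exactly 0.3 in IEEE 754 doubles.
a = 0.1 + 0.2
print(a)          # 0.30000000000000004
print(a == 0.3)   # False

# Decimal arithmetic sidesteps the binary rounding issue.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```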