this post was submitted on 23 Oct 2025
577 points (99.1% liked)

[–] Rothe@piefed.social 15 points 17 hours ago (2 children)

It doesn't really matter, "AI" is being asked to do a task it was never meant to do. It isn't good at it, and it will never be good at it.

[–] Cocodapuf@lemmy.world 1 points 7 hours ago* (last edited 7 hours ago)

Wow, way to completely ignore the content of the comment you're replying to. Clearly, some are better than others... so, how do the others perform? It's worth knowing before we make assertions.

The excerpt they quoted said:

"Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance."

So that implies the other assistants performed more than twice as well, presumably meaning serious issues in less than 38% of responses (still not great, but better). But when they say "more than double the other assistants," does that mean double the rate of one of the others, or double the average of the others? If it's an average, some models probably performed better than 38% while others performed worse.
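The distinction can be sketched with some invented numbers (these rates are hypothetical, purely to illustrate the ambiguity, not taken from the report):

```python
# Hypothetical serious-issue rates (%) for three other assistants.
others = [30, 35, 45]
gemini = 76

avg = sum(others) / len(others)  # about 36.7%

# "More than double the average" holds here...
print(gemini > 2 * avg)  # True: 76 > ~73.3

# ...but that is NOT the same as being double EVERY other assistant:
print(all(gemini > 2 * r for r in others))  # False: 2 * 45 = 90 > 76
```

In this made-up scenario one assistant still has issues in 45% of responses, well above the 38% you'd infer by halving Gemini's figure, which is exactly why the wording matters.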

This was the point: what was reported was insufficient information.

[–] snooggums@piefed.world 15 points 16 hours ago (2 children)

Using an LLM to return accurate information is like using a shoe to hammer a nail.

[–] athatet@lemmy.zip 5 points 11 hours ago

Except that a shoe is vaguely hammer-ish. More like pounding a screw in with your forehead.

[–] Rooster326@programming.dev 2 points 10 hours ago (1 children)
[–] snooggums@piefed.world 4 points 10 hours ago

Nope, my soles are too soft.