this post was submitted on 19 Jan 2026
248 points (94.9% liked)

Technology

melfie@lemy.lol | 15 points | 2 days ago

At least in Star Trek, the robots would say things like, “I am not programmed to respond in that area.” LLMs will just make shit up, which should really be the highest priority issue to fix if people are going to be expected to use them.

When I use coding agents, it is profoundly annoying when they generate code against an imaginary API, only to tell me that I'm "absolutely right to question this" when I ask for a link to the docs. I also generally find AI search useless: DuckDuckGo's summaries, for example, do link to sources, but those sources often contain no trace of the information presented in the summary.
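To illustrate the "imaginary API" pattern: a hallucinated call often looks entirely plausible and only fails at runtime. A hypothetical but representative example in Python, where an agent invents a `str.reverse()` method (strings have no such method; only lists do):

```python
# Hypothetical sketch of a hallucinated API call vs. the real equivalent.

s = "hello"

# Plausible-looking but imaginary: str has no .reverse() method,
# so this raises AttributeError at runtime.
try:
    s.reverse()
    reversed_s = s
except AttributeError:
    # The real, documented way to reverse a string: slicing.
    reversed_s = s[::-1]

print(reversed_s)  # olleh
```

The failure mode is exactly what makes it dangerous: the fake call type-checks visually and reads idiomatically, so it survives review until someone actually runs it or checks the docs.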

Until LLMs can directly cite and link a credible source for every piece of information they present, they're just not reliable enough to depend on for anything important. Even with sources linked, they would also need to rate and disclose the credibility of each source (e.g., is the study peer reviewed and reproduced, is the sample size adequate, etc.).