[–] Hammock_tann@lemmy.world -3 points 1 day ago (3 children)

Technically, LLMs aren't AI. What they do is basically predict relationships between words. They can't reason, count, or learn.

[–] leftzero@lemmy.dbzer0.com 4 points 1 day ago* (last edited 1 day ago)

Exactly. Nothing technical about it: they simply produce the token that is statistically most likely (according to their training data) to follow a given list of tokens.

Any information contained in their output (other than the fact that each token is probably the one most statistically likely to appear after the previous ones in the texts used for training, which I imagine could be useful to philologists) is purely circumstantial, and was already contained in their training data.

There's no reasoning involved in the process (other than, possibly, in the writing of the texts in their training data, if those predate LLMs and we're feeling optimistic about human intelligence), nor any mechanism in the LLM for reasoning to take place.

They are as far from AI as Markov chains were, just slightly more correct in their token likelihood predictions and several orders of magnitude more costly.
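To make the Markov comparison concrete, here's a minimal sketch (plain Python, a toy illustration, nothing like a production LLM) of a bigram Markov chain that just returns the statistically most likely next token from counts over a tiny corpus:

```python
from collections import Counter, defaultdict

# Toy "training data": count which token follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(token):
    # Return the token most often seen after `token` in the corpus.
    return counts[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" -- it follows "the" more often than "mat" or "fish"
```

An LLM is doing the same job, except the lookup table is replaced by a neural network trained on a vast corpus and conditioned on a much longer context.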

And them being sold as AI doesn't make them any closer, it just means the people and companies selling them are scammers.

[–] survirtual@lemmy.world 4 points 1 day ago

"Technically"? Wrong word. By all technical measures, they are technically 100% AI.

What you might be trying to say is they aren't AGI (artificial general intelligence). I would argue they might just be AGI. For instance, they can reason about what they are better than you can, while also being able to draw a pelican riding a unicycle.

What they certainly aren't is ASI (artificial super-intelligence). You can say they technically aren't ASI and you would be correct. ASI would be capable of improving itself faster than a human could.

[–] I_Clean_Here@lemmy.world 1 points 1 day ago (1 children)
[–] survirtual@lemmy.world 0 points 1 day ago

Careful, my other comment got removed because of a witty but still insightful dig.

They are very sensitive here about how the AI isn't really AI.