this post was submitted on 08 Jun 2025
755 points (95.9% liked)

LOOK MAA I AM ON FRONT PAGE

[–] spankmonkey@lemmy.world 7 points 1 day ago (1 children)

> Reasoning is limited

Most people wouldn't call zero of something 'limited'.

[–] auraithx@lemmy.dbzer0.com 10 points 1 day ago (2 children)

The paper doesn’t say LLMs can’t reason; it shows that their reasoning abilities are limited and collapse under increasing complexity or novel structure.

[–] technocrit@lemmy.dbzer0.com 4 points 1 day ago

> The paper doesn’t say LLMs can’t reason

Authors gotta get paid. This article is full of pseudo-scientific jargon.

[–] spankmonkey@lemmy.world 4 points 1 day ago (1 children)

I agree with the author.

If these models were truly "reasoning," they should get better with more compute and clearer instructions.

The fact that they only work up to a certain point despite increased resources is proof that they are just pattern matching, not reasoning.

[–] auraithx@lemmy.dbzer0.com 6 points 1 day ago (1 children)

Performance eventually collapses due to architectural constraints; this mirrors cognitive overload in humans. Reasoning isn’t just about adding compute; it requires mechanisms like abstraction, recursion, and memory. The models’ collapse doesn’t prove “only pattern matching”; it highlights that today’s models simulate reasoning in narrow bands but lack the structure to scale it reliably. That is a limitation of implementation, not a disproof of emergent reasoning.

[–] technocrit@lemmy.dbzer0.com 0 points 1 day ago (1 children)

Performance collapses because luck runs out. Destroying more of the planet won't fix that.

[–] auraithx@lemmy.dbzer0.com 1 points 1 day ago (2 children)

Brother, you'd better hope it does, because even if emissions dropped to zero tonight, the planet wouldn't stop warming, and it wouldn't stop what's coming for us.

[–] LostXOR@fedia.io 3 points 1 day ago (1 children)

If emissions dropped to 0 tonight, we would be substantially better off than if we maintain our current trajectory. Doomerism helps nobody.

[–] auraithx@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago)

It’s not doomerism, it’s just realism. Deluding yourself won’t change that.

[–] MCasq_qsaCJ_234@lemmy.zip 0 points 1 day ago (1 children)

If the situation gets dire, it's likely that the weather will be manipulated. Countries would then have to be convinced not to use this for military purposes.

[–] auraithx@lemmy.dbzer0.com 2 points 1 day ago

This isn’t a thing.