this post was submitted on 08 Jun 2025
770 points (95.7% liked)

Technology


LOOK MAA I AM ON FRONT PAGE

(page 4) 50 comments
[–] ZILtoid1991@lemmy.world 11 points 21 hours ago (3 children)

Thank you Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't already know this.

[–] brsrklf@jlai.lu 46 points 1 day ago (2 children)

You know, despite not really believing LLM "intelligence" works anywhere like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But that study seems to prove they're still not even good at that. At first I was wondering how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Towers of Hanoi (which they were trained on) and failing 4-move river crossings, even though logically those problems are very similar... They also failed to apply a step-by-step solution they were given.
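(For scale: a complete, step-by-step Tower of Hanoi solver is only a few lines of code. This is my own illustrative Python sketch, not anything from the paper; the names are made up.)

```python
def hanoi(n, src, dst, aux):
    # Classic recursive solution: move n-1 disks out of the way,
    # move the largest disk, then stack the n-1 disks back on top.
    # Always takes exactly 2**n - 1 moves and is fully deterministic.
    if n == 0:
        return []
    return (hanoi(n - 1, src, aux, dst)
            + [(src, dst)]
            + hanoi(n - 1, aux, dst, src))

moves = hanoi(7, "A", "C", "B")
print(len(moves))  # 127 moves for 7 disks
```

So "100-move" sounds impressive, but the whole move sequence falls out of one fixed recursive pattern, which is exactly the kind of thing you'd expect pattern-matching on training data to reproduce.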

[–] auraithx@lemmy.dbzer0.com 38 points 1 day ago

This paper doesn’t prove that LLMs aren’t good at pattern recognition, it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.

[–] technocrit@lemmy.dbzer0.com 16 points 1 day ago* (last edited 1 day ago)

Computers are awesome at "recognizing patterns" as long as the pattern is a statistical average of some possibly worthless data set. And it really helps if the computer is set up ahead of time to recognize pre-determined patterns.

[–] sev@nullterra.org 49 points 1 day ago (28 children)

Just fancy Markov chains with the ability to link bigger and bigger token sets. An LLM can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means it would be a physical impossibility for any LLM to achieve any real "reasoning" process.
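(To make the analogy concrete, here's a toy word-level Markov chain in Python. The identifiers are invented for the example, and real LLMs are vastly more elaborate; the point is just that both only ever predict a continuation of what they were fed.)

```python
import random
from collections import defaultdict

def build_bigram_model(words):
    # Map each word to the list of words that follow it in the corpus.
    # Repeats are kept, so frequent successors are sampled more often.
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    # Walk the chain: each next word depends only on the current word,
    # and generation only happens in response to a starting prompt.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return out

corpus = "the cat sat on the mat and the cat ran".split()
model = build_bigram_model(corpus)
print(" ".join(generate(model, "the", 5)))
```

An LLM swaps the lookup table for a neural network over much longer contexts, but the basic loop, predict the next token from what came before, is the same shape.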

[–] kescusay@lemmy.world 18 points 1 day ago (3 children)

I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy "dataset" that a proper neural network incorporates and reasons with, and the LLM could be kept real-time updated (sort of) with MCP servers that incorporate anything new it learns.

But I don't think we're anywhere near there yet.

[–] BlaueHeiligenBlume@feddit.org 8 points 21 hours ago (1 children)

Of course. That's obvious to anyone with basic knowledge of neural networks, no?

[–] technocrit@lemmy.dbzer0.com 23 points 1 day ago* (last edited 1 day ago) (6 children)

Why would they "prove" something that's completely obvious?

The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.

[–] tauonite@lemmy.world 15 points 22 hours ago

That's called science

[–] yeahiknow3@lemmings.world 23 points 1 day ago* (last edited 1 day ago) (1 children)

They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

[–] Mbourgon@lemmy.world 10 points 1 day ago (1 children)

Not when large swaths of people are being told to use it every day. Upper management has bought in on it.

[–] limelight79@lemmy.world 4 points 19 hours ago* (last edited 19 hours ago)

Yep. I'm retired now, but before I retired a month or so ago, I was working on a project that had relied on several hundred people back in 2020. The question kept coming up: "Why can't AI do it?"

The people I worked with are continuing the research and putting it up against the human coders, but... there was definitely an element of "AI can do that, so we won't need people next time." I sincerely hope management listens to reason. Our decisions could potentially lead to people being fired, so I think we were able to push back on the "AI can make all of these decisions" stance... for now.

The AI people were all in, they were ready to build an interface that told the human what the AI would recommend for each item. Errrm, no, that's not how an independent test works. We had to reel them back in.

[–] surph_ninja@lemmy.world 9 points 23 hours ago (38 children)

You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

[–] LemmyIsReddit2Point0@lemmy.world 15 points 23 hours ago

We also reward people who can memorize and regurgitate even if they don't understand what they are doing.

[–] reksas@sopuli.xyz 37 points 1 day ago (4 children)

does ANY model reason at all?

[–] 4am@lemm.ee 34 points 1 day ago (3 children)

No, and to make that work using the current structures we use for creating AI models we’d probably need all the collective computing power on earth at once.

[–] LonstedBrowryBased@lemm.ee 12 points 1 day ago (2 children)

Yah, of course they do, they're computers

[–] finitebanjo@lemmy.world 20 points 1 day ago (4 children)

That's not really a valid argument for why, but yes the models which use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

[–] EncryptKeeper@lemmy.world 15 points 1 day ago (2 children)

TBH idk how people can convince themselves otherwise.

They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed to convince them not only that AI is something it’s not, but also that anyone who says otherwise (like you) is just a Luddite who is going to be “left behind”.

[–] Blackmist@feddit.uk 4 points 22 hours ago (1 children)

It's no surprise to me that the person at work who is most excited by AI, is the same person who is most likely to be replaced by it.

[–] turmacar@lemmy.world 13 points 1 day ago* (last edited 1 day ago) (5 children)

I think because it's language.

There's a famous quote from Charles Babbage, who, when he presented his difference engine (a gear-based calculator), was asked, "If you put in the wrong figures, will the correct ones be output?" Babbage couldn't understand how someone could so thoroughly misunderstand that the machine is just a machine.

People are people; the main thing that's changed since the cuneiform copper customer complaint is our materials science and networking ability. Most people just assume that the things they interact with every day work the way they appear to on the surface.

And nothing other than a person can do math problems or talk back to you. So people assume that means intelligence.

[–] finitebanjo@lemmy.world 9 points 1 day ago

I often feel like I'm surrounded by idiots, but even I can't begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.
