this post was submitted on 08 Jun 2025
564 points (95.8% liked)

Technology

[–] Jhex@lemmy.world 45 points 15 hours ago (1 children)

This is so Apple: claiming to invent or discover something "first" three years after the rest of the market.

[–] Nanook@lemm.ee 190 points 19 hours ago (40 children)

lol, is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.

[–] MNByChoice@midwest.social 58 points 18 hours ago (1 children)

The "Apple" part. CEOs only care what companies say.

[–] kadup@lemmy.world 35 points 17 hours ago (5 children)

Apple is significantly behind and arrived late to the whole AI hype, so of course it's in their absolute best interest to keep showing how LLMs aren't special or amazingly revolutionary.

They're not wrong, but the motivation is also pretty clear.

[–] homesweethomeMrL@lemmy.world 15 points 14 hours ago

“Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of Microsoft is just fine.

[–] surph_ninja@lemmy.world 9 points 12 hours ago (9 children)

You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

[–] LemmyIsReddit2Point0@lemmy.world 13 points 11 hours ago

We also reward people who can memorize and regurgitate even if they don't understand what they are doing.

[–] silasmariner@programming.dev 2 points 10 hours ago

Some of them, sometimes. But some are adulated and free and contribute vast swathes to our culture and understanding.

[–] LonstedBrowryBased@lemm.ee 15 points 14 hours ago (2 children)

Yeah, of course they do; they're computers.

[–] finitebanjo@lemmy.world 19 points 13 hours ago (4 children)

That's not really a valid argument for why, but yes, the models that use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

[–] EncryptKeeper@lemmy.world 15 points 12 hours ago (2 children)

TBH idk how people can convince themselves otherwise.

They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed not only to convince them that AI is something it’s not, but also that anyone who says otherwise (like you) is just a Luddite who is going to be “left behind”.

[–] Blackmist@feddit.uk 5 points 10 hours ago (1 children)

It's no surprise to me that the person at work who is most excited by AI, is the same person who is most likely to be replaced by it.

[–] EncryptKeeper@lemmy.world 3 points 10 hours ago

Yeah, the excitement comes from the fact that they’re thinking of replacing themselves and keeping the money. They don’t get to “Step 2” in their heads lmao.

[–] turmacar@lemmy.world 12 points 13 hours ago* (last edited 13 hours ago) (3 children)

I think because it's language.

There's a famous anecdote about Charles Babbage presenting his difference engine (a gear-based calculator): someone asked, "If you put in the wrong figures, will the right answers come out?" and Babbage couldn't comprehend how anyone could so thoroughly misunderstand that the machine is just a machine.

People are people; the main thing that's changed since the cuneiform copper customer complaint is our materials science and networking ability. Most things that people interact with every day, most people just assume work the way they appear to on the surface.

And nothing other than a person can do math problems or talk back to you. So people assume that means intelligence.

[–] finitebanjo@lemmy.world 10 points 12 hours ago

I often feel like I'm surrounded by idiots, but even I can't begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.

[–] technocrit@lemmy.dbzer0.com 25 points 16 hours ago* (last edited 16 hours ago) (4 children)

Why would they "prove" something that's completely obvious?

The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.

[–] TheRealKuni@midwest.social 26 points 13 hours ago (2 children)

Why would they "prove" something that's completely obvious?

I don’t want to be critical, but I think if you step back a bit and look at what you’re saying, you’re asking why we would bother to experiment and prove what we think we know.

That’s a perfectly normal and reasonable scientific pursuit. Yes, in a rational society the burden of proof would be on the grifters, but that’s never how it actually works. It’s always the doctors disproving the cure-all, not the snake-oil salesmen failing to prove their own product.

There is value in this research, even if it fits what you already believe on the subject. I would think you would be thrilled to have your hypothesis confirmed.

[–] yeahiknow3@lemmings.world 21 points 16 hours ago* (last edited 16 hours ago) (1 children)

They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

[–] Mbourgon@lemmy.world 10 points 14 hours ago (1 children)

Not when large swaths of people are being told to use it every day. Upper management has bought in on it.

[–] brsrklf@jlai.lu 41 points 17 hours ago (2 children)

You know, despite not really believing LLM "intelligence" works anything like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But that study seems to prove they're still not even good at that. At first I was wondering how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Towers of Hanoi (which they were trained on) and failing 4-move river crossings. Logically, those problems are very similar... They also fail to apply a step-by-step solution they were given.
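What makes the Towers of Hanoi result striking is that the puzzle has a trivially mechanical solution. A minimal sketch in Python (purely illustrative; the paper evaluated models on move sequences, not on code):

```python
def hanoi(n, src, dst, aux, moves):
    """Append the classic recursive Tower of Hanoi solution to `moves`:
    move n disks from peg `src` to peg `dst` using `aux` as spare."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)  # park n-1 disks on the spare peg
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)  # bring the n-1 disks on top of it

moves = []
hanoi(5, "A", "C", "B", moves)
print(len(moves))  # 2**5 - 1 = 31 moves
```

A 100-move tower is just this recursion run a little deeper, which is why "solves long Hanoi but fails short river crossings" reads as memorization of a familiar pattern rather than general reasoning.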

[–] auraithx@lemmy.dbzer0.com 33 points 17 hours ago

This paper doesn’t prove that LLMs aren’t good at pattern recognition, it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.

[–] technocrit@lemmy.dbzer0.com 14 points 15 hours ago* (last edited 15 hours ago)

Computers are awesome at "recognizing patterns" as long as the pattern is a statistical average of some possibly worthless data set. And it really helps if the computer is set up ahead of time to recognize pre-determined patterns.

[–] sev@nullterra.org 43 points 18 hours ago (22 children)

Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.
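For anyone unfamiliar with the analogy: a word-level Markov chain is just a lookup table of which token follows which, sampled at random. A toy sketch (illustrative only; real LLMs condition on long contexts with learned weights, not a raw frequency table):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed following it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from `start`, picking each next word at random."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = chain.get(out[-1])
        if not candidates:
            break  # dead end: no observed successor
        out.append(random.choice(candidates))
    return " ".join(out)

chain = build_chain("the cat sat on the mat the cat ran")
print(generate(chain, "the", 5))
```

The point of the comparison is structural: generation only happens as a reaction to a prompt, one token at a time, from frozen statistics. Whether that structural similarity settles the "reasoning" question is exactly what's being argued here.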

[–] reksas@sopuli.xyz 34 points 19 hours ago (4 children)

does ANY model reason at all?

[–] 4am@lemm.ee 32 points 18 hours ago (3 children)

No, and to make that work using the current structures we use for creating AI models we’d probably need all the collective computing power on earth at once.

[–] crystalmerchant@lemmy.world 2 points 10 hours ago (1 children)

I mean... Is that not reasoning, I guess? It's what my brain does-- recognizes patterns and makes split second decisions.

[–] mavu@discuss.tchncs.de 6 points 10 hours ago

Yes, this comment seems to indicate that your brain does work that way.
