this post was submitted on 08 Jun 2025
450 points (95.0% liked)

[–] intensely_human@lemm.ee 1 points 29 minutes ago

Fair, but the same is true of me. I don't actually "reason"; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a "nasty logic error" pattern match at some point in the process, I "know" I've found a "flaw in the argument" or "bug in the design".
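The loop described above can be sketched as toy code (every name and pattern here is hypothetical, purely illustrative): try known patterns against a situation, break it into smaller components, and recurse until something like a "nasty logic error" turns up.

```python
# Illustrative sketch of "reasoning" as recursive pattern matching.
# All names and patterns are made up for the example.

def find_flaws(situation, patterns):
    """Try known pattern matchers against a situation; recurse into components."""
    flaws = []
    for name, matches in patterns.items():
        if matches(situation):
            flaws.append(name)
    # Break the situation down into smaller components and apply the same patterns.
    for part in situation.get("components", []):
        flaws.extend(find_flaws(part, patterns))
    return flaws

patterns = {
    # A "nasty logic error" pattern: a component that contradicts itself.
    "contradiction": lambda s: s.get("claim") == "A" and s.get("also_claims") == "not A",
}

design = {"components": [{"claim": "A", "also_claims": "not A"}, {"claim": "B"}]}
print(find_flaws(design, patterns))  # → ['contradiction']
```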

But there's no from-first-principles method by which I developed all these patterns; it's just things that have survived the test of time when other patterns have failed me.

I don't think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.

[–] GaMEChld@lemmy.world 9 points 2 hours ago (2 children)

Most humans don't reason. They just parrot shit too. The design is very human.

[–] elbarto777@lemmy.world 2 points 24 minutes ago

LLMs deal with tokens. Essentially, they predict the next token in a sequence.

Humans do much, much, much, much, much, much, much more than that.
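For what it's worth, "predicting the next token" looks roughly like this (a minimal illustrative sketch with a made-up vocabulary and made-up scores, not a real model):

```python
# Toy sketch of next-token prediction: turn raw scores over a vocabulary
# into probabilities, then pick the most likely next token.
import math

vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 0.5, 1.0, 0.1]  # made-up scores a model might output

# Softmax: convert raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: take the highest-probability token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # → the
```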

[–] SpaceCowboy@lemmy.ca 1 points 1 hour ago

Yeah, I've always said that the flaw in Turing's Imitation Game is that an AI being indistinguishable from a human wouldn't prove it's intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take drugs that eventually killed him, simply because he was gay.

[–] mavu@discuss.tchncs.de 41 points 4 hours ago

No way!

Statistical Language models don't reason?

But OpenAI, robots taking over!

[–] ZILtoid1991@lemmy.world 11 points 3 hours ago

Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

[–] BlaueHeiligenBlume@feddit.org 6 points 3 hours ago

Of course; that's obvious to anyone with basic knowledge of neural networks, no?

[–] vala@lemmy.world 24 points 6 hours ago
[–] surph_ninja@lemmy.world 14 points 5 hours ago (3 children)

You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

[–] petrol_sniff_king@lemmy.blahaj.zone 12 points 4 hours ago (4 children)

Maybe you failed all your high school classes, but that ain't got none to do with me.

[–] LemmyIsReddit2Point0@lemmy.world 12 points 5 hours ago

We also reward people who can memorize and regurgitate even if they don't understand what they are doing.

[–] silasmariner@programming.dev 2 points 4 hours ago

Some of them, sometimes. But some are adulated and free and contribute vast swathes to our culture and understanding.

[–] crystalmerchant@lemmy.world 3 points 4 hours ago (1 children)

I mean... isn't that reasoning, I guess? It's what my brain does: recognizes patterns and makes split-second decisions.

[–] mavu@discuss.tchncs.de 2 points 4 hours ago

Yes, this comment seems to indicate that your brain does work that way.

[–] bjoern_tantau@swg-empire.de 36 points 7 hours ago
[–] Jhex@lemmy.world 40 points 9 hours ago (1 children)

this is so Apple, claiming to invent or discover something "first" three years after the rest of the market

[–] LonstedBrowryBased@lemm.ee 14 points 7 hours ago (2 children)

Yeah, of course they do. They're computers.

[–] intensely_human@lemm.ee 1 points 27 minutes ago

Computers are better at logic than brains are. We emulate logic; they do it natively.

It just so happens there's no logical algorithm for "reasoning" through a problem.

[–] finitebanjo@lemmy.world 16 points 7 hours ago (3 children)

That's not really a valid argument for why, but yes, the models that assemble statistical patterns from training data are all bullshitting. TBH idk how people can convince themselves otherwise.

[–] intensely_human@lemm.ee 1 points 26 minutes ago

They aren't bullshitting because the training data is based on reality. Reality bleeds through the training data into the model. The model is a reflection of reality.

[–] EncryptKeeper@lemmy.world 12 points 6 hours ago (1 children)

TBH idk how people can convince themselves otherwise.

They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed to convince them not only that AI is something it’s not, but also that anyone who says otherwise (like you) is just a luddite who is going to be “left behind”.

[–] Blackmist@feddit.uk 4 points 4 hours ago (1 children)

It's no surprise to me that the person at work who is most excited by AI, is the same person who is most likely to be replaced by it.

[–] EncryptKeeper@lemmy.world 2 points 3 hours ago

Yeah, the excitement comes from the fact that they’re thinking of replacing themselves and keeping the money. They don’t get to “Step 2” in their heads lmao.

[–] turmacar@lemmy.world 10 points 7 hours ago* (last edited 7 hours ago) (1 children)

I think because it's language.

There's a famous story about Charles Babbage presenting his difference engine (a gear-based calculator): he was asked, "if you put into the machine wrong figures, will the right answers come out?", and he couldn't understand how anyone could so thoroughly misunderstand that the machine is just a machine.
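For the curious, the difference engine's trick fits in a few lines: it tabulates a polynomial using nothing but repeated addition of finite differences, which is all the gears could do. (The initial differences below are for f(x) = x², chosen just for illustration.)

```python
# Sketch of the difference engine's method: tabulate a polynomial
# using only repeated addition of finite differences (no multiplication).

def tabulate(initial_differences, steps):
    """initial_differences: [f(0), Δf(0), Δ²f(0), ...] for a polynomial f."""
    diffs = list(initial_differences)
    table = [diffs[0]]
    for _ in range(steps):
        # Each value is updated purely by addition, as the gear columns did.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
        table.append(diffs[0])
    return table

# f(x) = x^2: f(0) = 0, Δf(0) = f(1) - f(0) = 1, Δ²f = 2 (constant for a quadratic)
print(tabulate([0, 1, 2], 5))  # → [0, 1, 4, 9, 16, 25]
```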

People are people; the main things that have changed since the cuneiform copper customer complaint are our materials science and our networking ability. Most people just assume the things they interact with every day work the way they appear to on the surface.

And nothing other than a person can do math problems or talk back to you. So people assume that means intelligence.

[–] finitebanjo@lemmy.world 8 points 6 hours ago

I often feel like I'm surrounded by idiots, but even I can't begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.
