this post was submitted on 08 Jun 2025
774 points (95.7% liked)

LOOK MAA I AM ON FRONT PAGE

[–] mfed1122@discuss.tchncs.de 13 points 1 day ago* (last edited 1 day ago) (5 children)

This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless they're coupled with research on the human brain showing we do something different.

[–] LesserAbe@lemmy.world 11 points 1 day ago (1 children)

Agreed. We don't seem to have a very cohesive idea of what human consciousness is or how it works.

[–] technocrit@lemmy.dbzer0.com -2 points 1 day ago* (last edited 1 day ago) (1 children)

... And so we should call machines "intelligent"? That's not how science works.

[–] LesserAbe@lemmy.world 6 points 23 hours ago

I think you're misunderstanding the argument. I haven't seen people here saying that the study was incorrect as far as it goes, or that AI is equal to human intelligence. But it does seem to have a kind of intelligence. "Glorified autocomplete" doesn't seem sufficient, because it has a completely different quality from any past tool. Suppose that, yes, on a technical level the software just pieces together probabilities based on overtraining. Can we say with any precision how the human mind stores information and creates intelligence? Maybe we're stumbling down the right path but need further innovations.

[–] amelia@feddit.org 2 points 21 hours ago

This. Same with the discussion about consciousness. People always claim that AI is not real intelligence, but no one can ever define what real/human intelligence is. It's like people believe in something like a human soul without admitting it.

[–] Endmaker@ani.social 7 points 1 day ago (1 children)

You've hit the nail on the head.

Personally, I wish there were more progress in our understanding of human intelligence.

[–] technocrit@lemmy.dbzer0.com 1 points 1 day ago

Their argument is that we don't understand human intelligence so we should call computers intelligent.

That's not hitting any nail on the head.

[–] technocrit@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago) (1 children)

why is it assumed that this isn’t what human reasoning consists of?

Because science doesn't work like that. Nobody should assume wild hypotheses without any evidence whatsoever.

Isn’t all our reasoning ultimately a form of pattern memorization? I sure feel like it is.

You should get a job in "AI". smh.

[–] mfed1122@discuss.tchncs.de 5 points 1 day ago

Sorry, I can see why my original post was confusing, but I think you've misunderstood me. I'm not claiming that I know the way humans reason. In fact, you and I are in total agreement that it is unscientific to assume hypotheses without evidence. This is exactly what I am saying is the mistake in the statement "AI doesn't actually reason, it just follows patterns". That is unscientific if we don't know whether "actually reasoning" consists of following patterns or something else. As far as I know, the jury is out on the fundamental nature of how human reasoning works. It's my personal, subjective feeling that human reasoning works by following patterns. But I'm not saying "AI does actually reason like humans because it follows patterns like we do". Again, I see how what I said could have come off that way. What I mean more precisely is:

It's not clear whether AI's pattern-following techniques are the same as human reasoning, because we aren't clear on how human reasoning works. My intuition tells me that humans doing pattern following is just as valid an initial guess as humans not doing pattern following, so shouldn't we have studies to back up whichever direction we lean?

I think you and I are in agreement; we're upholding the same principle, just in different directions.

[–] count_dongulus@lemmy.world 1 points 1 day ago (2 children)

Humans apply judgment, because they have emotion. LLMs do not possess emotion. Mimicking emotion without ever actually having the capability of experiencing it is sociopathy. An LLM would at best apply patterns like a sociopath.

[–] mfed1122@discuss.tchncs.de 8 points 1 day ago* (last edited 1 day ago) (2 children)

But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we're not looking for emotional judgements; we're trying to evaluate logical reasoning capabilities. A sociopath would be just as capable of solving logic puzzles as a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions. So I'm not sure that emotions have much relevance to the topic of AI or human reasoning and problem solving, at least not this particular aspect of it.
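(To illustrate just how simple that is, here's a minimal sketch of the textbook recursive solution; this code is my own illustration, not anything from the study:)

```python
# Minimal sketch: the classic recursive Towers of Hanoi solver.
def hanoi(n, source, target, spare, moves=None):
    """Return the list of moves that transfers n disks from source to target."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the way to the largest disk
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # restack the rest on top of it
    return moves

print(len(hanoi(10, "A", "C", "B")))  # 1023 moves, i.e. 2**10 - 1
```

Ten disks take exactly 2^10 - 1 = 1023 moves, and a dozen lines of code get there every time, with no emotions involved.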

As for analogizing LLMs to sociopaths, I think that's a bit odd too. The reason why we (stereotypically) find sociopathy concerning is that a person has their own desires which, in combination with a disinterest in others' feelings, incentivizes them to be deceitful or harmful in some scenarios. But LLMs are largely designed specifically as servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we're giving them far too much credit as sentient autonomous beings - and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack a lot of the other subsystems that are necessary for an entity to function in a way that can be considered autonomous, having free will, desires of its own choosing, etc.

[–] MCasq_qsaCJ_234@lemmy.zip 3 points 1 day ago

In fact, simple computer programs do a great job of solving these puzzles...

If an AI is trained to do this, it will be very good at it, like when GPT-2 was trained to multiply numbers of up to 20 digits.

https://nitter.net/yuntiandeng/status/1836114419480166585#m

Here they run the same test on GPT-4o, o1-mini, and o3-mini:

https://nitter.net/yuntiandeng/status/1836114401213989366#m

https://nitter.net/yuntiandeng/status/1889704768135905332#m

[–] technocrit@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago) (1 children)

In fact, simple computer programs do a great job of solving these puzzles...

Yes, this shit is very basic. Not at all "intelligent."

[–] mfed1122@discuss.tchncs.de 2 points 1 day ago

But reasoning about it is intelligent, and the point of this study is to determine the extent to which these models are reasoning or not. Which, again, has nothing to do with emotions. And furthermore, the real question here is my initial one: whether pattern following should automatically be disqualified as intelligence, as the person summarizing this study (and notably not the study itself) claims.

[–] riskable@programming.dev 1 points 1 day ago* (last edited 1 day ago)

That just means they'd be great CEOs!

According to Wall Street.