this post was submitted on 01 Jun 2025
274 points (96.3% liked)

Technology


I found the article in a post on the fediverse, and I can't find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and could then see how it worked internally by tracing the activation paths it used, but found nothing like actual mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it found the result, i.e. what its internal reasoning was. The answer was detailed step-by-step mathematical logic, like a human explaining how to perform an addition.

This showed 2 things:

  • LLMs don't "know" how they work

  • the second answer was a rephrasing of the original training text that explains how math works, so the LLM just used that as an explanation

I think it was a very interesting and meaningful analysis.

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher @lemmy.world, it's this one: https://www.anthropic.com/research/tracing-thoughts-language-model

EDIT2: I'm aware LLMs don't "know" anything and don't reason, and that's exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

[–] glizzyguzzler@lemmy.blahaj.zone 68 points 1 week ago (49 children)

Can’t help but here’s a rant on people asking LLMs to “explain their reasoning” which is impossible because they can never reason (not meant to be attacking OP, just attacking the “LLMs think and reason” people and companies that spout it):

LLMs are just matrix math to complete the most likely next word. They don’t know anything and can’t reason.
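To make the "most likely next word via matrix math" point concrete, here's a toy sketch (the five-word vocabulary and every number in `W` are invented for illustration; a real LLM stacks billions of these multiplications with learned weights, this just shows the shape of the operation):

```python
import math

# Toy illustration of "next word = matrix math" (NOT a real LLM; the
# vocabulary and every number in W are invented for this sketch).
vocab = ["the", "cat", "sat", "on", "mat"]

# Weight matrix: row = current word, column = score for each candidate next word.
W = [
    [0.0, 2.0, 0.0, 0.0, 1.0],  # after "the"
    [0.0, 0.0, 2.0, 0.0, 0.0],  # after "cat"
    [0.0, 0.0, 0.0, 2.0, 0.0],  # after "sat"
    [2.0, 0.0, 0.0, 0.0, 1.0],  # after "on"
    [1.0, 0.0, 2.0, 0.0, 0.0],  # after "mat"
]

def matvec(v, M):
    """Vector-matrix product: the matrix math being discussed."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def next_word(word):
    one_hot = [1.0 if w == word else 0.0 for w in vocab]  # encode the input word
    scores = matvec(one_hot, W)                            # one multiplication
    exps = [math.exp(s) for s in scores]                   # softmax: scores ->
    probs = [e / sum(exps) for e in exps]                  # probabilities
    return vocab[probs.index(max(probs))]                  # pick the most likely

print(next_word("the"))  # -> cat
```

There's no arithmetic or "understanding" anywhere in there, just scores and a pick of the highest one.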

Anything you read or hear about LLMs or “AI” getting “asked questions” or “explain its reasoning” or talking about how they’re “thinking” is just AI propaganda to make you think they’re doing something LLMs literally can’t do but people sure wish they could.

In this case it sounds like people who don’t understand how LLMs work eating that propaganda up and approaching LLMs like there’s something to talk to or discern from.

If you waste egregiously high amounts of gigawatts to put everything that’s ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.

It’d be impressive if the environmental toll making the matrices and using them wasn’t critically bad.

TL;DR: LLMs can never think or reason, anyone talking about them thinking or reasoning is bullshitting, they utilize almost everything that’s ever been typed to give (occasionally) reasonably useful outputs that are the most basic bitch shit because that’s the most likely next word at the cost of environmental disaster

[–] AnneBonny@lemmy.dbzer0.com 7 points 6 days ago (1 children)

How would you prove that someone or something is capable of reasoning or thinking?

[–] glizzyguzzler@lemmy.blahaj.zone 6 points 6 days ago (2 children)

You can prove it’s not by doing some matrix multiplication and seeing it’s matrix multiplication. Much easier way to go about it

[–] theunknownmuncher@lemmy.world 19 points 6 days ago* (last edited 6 days ago) (1 children)

Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn't post a relevant or complete thought

Your comment is like saying an audio file isn't really music because it's just a series of numbers.

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 6 days ago* (last edited 6 days ago) (3 children)

Improper comparison; an audio file isn’t the basic action on data, it is the data; the audio codec is the basic action on the data

“An LLM model isn’t really an LLM because it’s just a series of numbers”

But the action of turning the series of numbers into something of value (audio codec for an audio file, matrix math for an LLM) are actions that can be analyzed

And clearly matrix multiplication cannot reason any better than an audio codec algorithm. It’s matrix math, it’s cool we love matrix math. Really big matrix math is really cool and makes real sounding stuff. But it’s just matrix math, that’s how we know it can’t think

[–] theunknownmuncher@lemmy.world 5 points 6 days ago* (last edited 6 days ago) (1 children)

LOL you didn't really make the point you thought you did. It isn't an "improper comparison" (it's called a false equivalency FYI), because there isn't a real distinction between information and this thing you just made up called "basic action on data", but anyway have it your way:

Your comment is still exactly like saying an audio pipeline isn't really playing music because it's actually just doing basic math.

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 3 days ago (1 children)

I was channeling the Interstellar docking computer (“improper contact” in such a sassy voice) ;)

There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

An audio codec (not a pipeline) is just actually doing math - just like the workings of an LLM. There’s plenty of work to be done after the audio codec decodes the m4a to get to tunes in your ears. Same for an LLM, sandwiching those matrix multiplications that make the magic happen are layers that crunch the prompts and assemble the tokens you see it spit out.
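That "sandwich" can be sketched like this (everything here is a made-up stand-in, just to show the shape of the pipeline, not anything from a real implementation):

```python
# Hypothetical sketch of the sandwich: tokenize -> number crunching -> detokenize.
# Every piece is a fake stand-in, not how any real LLM or tokenizer works.

def tokenize(text):
    # stand-in tokenizer: one "token" per character
    return [ord(c) for c in text]

def model(token_ids):
    # stand-in for the stacked matrix multiplications in the middle
    return [(t + 1) % 128 for t in token_ids]

def detokenize(token_ids):
    # turn the output numbers back into text you can read
    return "".join(chr(t) for t in token_ids)

print(detokenize(model(tokenize("hi"))))  # -> ij
```

The middle stage only ever sees and emits numbers; the readable text exists only at the two ends.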

LLMs can’t think, that’s just the fact of how they work. The problem is that AI companies are happy to describe them in terms that make you think they can think, to sell their product! I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago. AI companies will string LLMs together and let them chew for a while to try to make them catch when they’re dropping bullshit. It’s still not thinking and reasoning though. They can be useful tools, but LLMs are just tools, not sentient or verging on sentient

[–] theunknownmuncher@lemmy.world 0 points 3 days ago* (last edited 2 days ago) (1 children)

There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

Incorrect. You might want to take an information theory class before speaking on subjects like this.

I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago.

Lmao yup totally, it's not like this type of research currently gets huge funding at universities and institutions or anything like that 😂 it's a dead research field because it's already "settled". (You're wrong 🤭)

LLMs are just tools not sentient or verging on sentient

Correct. No one claimed they are "sentient" (you actually mean "sapient", not "sentient", but it's fine because people commonly mix these terms up. Sentience is about the physical senses. If you can respond to stimuli from your environment, you're sentient, if you can "I think, therefore I am", you're sapient). And no, LLMs are not sapient either, and sapience has nothing to do with neural networks' ability to mathematically reason or use logic, you're just moving the goalpost. But at least you moved it far enough to be actually correct?

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 12 hours ago (1 children)

It’s wild, we’re just completely talking past each other at this point! I don’t think I’ve ever gotten to a point where I’m like “it’s blue” and someone’s like “it’s gold” so clearly. And like I know enough to know what I’m talking about and that I’m not wrong (unis are not getting tons of grants to see “if AI can think”, no one but fart sniffing AI bros would fund that (see OP’s requested source is from an AI company about their own model), research funding goes towards making useful things not if ChatGPT is really going through it like the rest of us), but you are very confident in yourself as well. Your mention of information theory leads me to believe you’ve got a degree in the computer science field. The basis of machine learning is not in computer science but in stats (math). So I won’t change my understanding based on your claims since I don’t think you deeply know the basis just the application. The focus on using the “right words” as a gotchya bolsters that vibe. I know you won’t change your thoughts based on my input, so we’re at the age-old internet stalemate! Anyway, just wanted you to know why I decided not to entertain what you’ve been saying - I’m sure I’m in the same boat from your perspective ;)

[–] theunknownmuncher@lemmy.world 1 points 9 hours ago (1 children)

loses the argument "we’re at the age-old internet stalemate!" LMAO

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 4 hours ago

Indeed I did not, we’re at a stalemate because you and I do not believe what the other is saying! So we can’t move anywhere since it’s two walls. Buuuut Tim Apple got my back for once, just saw this now!: https://lemmy.blahaj.zone/post/27197259

I’ll leave it at that, as thanks to that white paper I win! Yay internet points!

[–] BB84@mander.xyz 4 points 6 days ago

Can humans think?

[–] DarkDarkHouse@lemmy.sdf.org 4 points 6 days ago (1 children)

Do LLMs not exhibit emergent behaviour? But who am I, a simple skin-bag of chemicals, to really say.

They do not, and I, a simple skin-bag of chemicals (mostly water tho) do say

[–] whaleross@lemmy.world 7 points 6 days ago (2 children)

People that cannot do matrix multiplication do not possess the basic concepts of intelligence now? Or is software that can do matrix multiplication intelligent?

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 3 days ago (1 children)

So close, LLMs work via matrix multiplication, which is well understood by many meat bags and matrix math can’t think. If a meat bag can’t do matrix math, that’s ok, because the meat bag doesn’t work via matrix multiplication. lol imagine forgetting how to do matrix multiplication and disappearing into a singularity or something

[–] whaleross@lemmy.world 1 points 3 days ago

Well, on the other hand, meat bags can't really do neuron stuff either, despite it being essential for any meat bag operation. Humans are still here though, and so are dogs.
