this post was submitted on 01 Jun 2025
274 points (96.3% liked)


I found the article in a post on the fediverse, and I can't find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and could then see how it worked internally: by finding similar paths, but nothing like performing mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it found the result, i.e. what its internal reasoning was. The answer was detailed, step-by-step mathematical logic, like a human explaining how to perform an addition.

This showed 2 things:

  • LLMs don't "know" how they work

  • the second answer was a rephrasing of text from the training data that explains how math works, so the LLM just used that as an explanation

I think it was a very interesting and meaningful analysis.

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher@lemmy.world, it's this one: https://www.anthropic.com/research/tracing-thoughts-language-model

EDIT 2: I'm aware LLMs don't "know" anything and don't reason, and that's exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

[–] glizzyguzzler@lemmy.blahaj.zone 68 points 6 days ago (7 children)

Can’t help it, here’s a rant on people asking LLMs to “explain their reasoning”, which is impossible because they can never reason (not meant as an attack on OP, just on the “LLMs think and reason” people and companies that spout it):

LLMs are just matrix math to complete the most likely next word. They don’t know anything and can’t reason.
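
(To put the “just matrix math” claim in concrete terms, here’s a minimal sketch of what one next-word prediction step boils down to: a state vector times a weight matrix, a softmax, and picking the highest-probability token. All names and numbers below are made up for illustration; nothing here comes from any real model.)

```python
import numpy as np

# Toy, made-up numbers purely for illustration; real models have billions of
# weights and many layers, but the final step is the same kind of operation.
vocab = ["the", "cat", "sat", "mat", "11"]
rng = np.random.default_rng(0)

hidden_state = rng.normal(size=16)         # what the model "knows" after reading the prompt
output_weights = rng.normal(size=(16, 5))  # learned weights mapping that state to vocab scores

logits = hidden_state @ output_weights     # the matrix math
probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax turns scores into probabilities

next_word = vocab[int(np.argmax(probs))]   # "complete the most likely next word"
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```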

Anything you read or hear about LLMs or “AI” getting “asked questions”, being asked to “explain their reasoning”, or talking about how they’re “thinking” is just AI propaganda to make you think they’re doing something LLMs literally can’t do but people sure wish they could.

In this case it sounds like people who don’t understand how LLMs work are eating that propaganda up and approaching LLMs like there’s something to talk to or discern from.

If you waste egregiously high amounts of gigawatts to put everything that’s ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.

It’d be impressive if the environmental toll making the matrices and using them wasn’t critically bad.

TL;DR: LLMs can never think or reason; anyone talking about them thinking or reasoning is bullshitting. They utilize almost everything that’s ever been typed to give (occasionally) reasonably useful outputs that are the most basic bitch shit, because that’s the most likely next word, at the cost of environmental disaster.

[–] peoplebeproblems@midwest.social 20 points 6 days ago (2 children)

People don't understand what "model" means. That's the unfortunate reality.

[–] adespoton@lemmy.ca 14 points 6 days ago (1 children)

They walk down runways and pose for magazines. Do they reason? Sometimes.

[–] IncogCyberspaceUser@lemmy.world 11 points 6 days ago (1 children)

Yeah. That's because people's unfortunate reality is a "model".

[–] WolfLink@sh.itjust.works 6 points 6 days ago (1 children)

The environmental toll doesn’t have to be that bad. You can get decent results from a single high-end gaming GPU.

You can, but the stuff that’s really useful (very competent code completion) needs gigantic context lengths that even rich peeps with $2k GPUs can’t handle. And that’s ignoring the training power and hardware costs to get the models.
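
(Rough, back-of-the-envelope sketch of why gigantic context lengths blow past a $2k GPU: at inference time the model caches key/value vectors for every token at every layer, and that cache alone grows with context length. The model dimensions below are assumptions for illustration, not any specific product.)

```python
# KV cache ~= 2 (keys + values) * layers * hidden_size * context_length * bytes_per_value
# Assumed, illustrative dimensions for a large model; not any specific product.
layers, hidden_size = 80, 8192
context_length = 128_000      # the "gigantic context length" case
bytes_per_value = 2           # fp16

kv_cache_gib = 2 * layers * hidden_size * context_length * bytes_per_value / 2**30
print(f"KV cache alone: ~{kv_cache_gib:.0f} GiB")   # ~313 GiB, before even counting the weights

gaming_gpu_vram_gib = 24      # ballpark VRAM of a high-end consumer card
print(f"High-end gaming GPU VRAM: ~{gaming_gpu_vram_gib} GiB")
```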

Techbros chasing VC funding are pushing LLMs to the physical limit of what humanity can provide power- and hardware-wise. Way less hype and letting them come to market organically in 5-10 years would give the LLMs a lot more power efficiency at the current context and depth limits. But that ain’t this timeline, we just got VC money looking to buy nuclear plants and fascists trying to subdue the US for the techbro oligarchs womp womp

[–] Treczoks@lemmy.world 10 points 6 days ago (1 children)

I've read that article. They used something they called an "MRI for AIs" and checked e.g. how an AI handled math questions, then asked the AI how it came to that answer, and the pathways actually differed. While the AI talked about using a textbook method, it actually took a different approach. That's what I remember of that article.

But yes, it exists, and it is science, not TikTok.

[–] lgsp@feddit.it 5 points 6 days ago

Thank you. I found the article, link is in the OP.

[–] AnneBonny@lemmy.dbzer0.com 7 points 6 days ago (1 children)

How would you prove that someone or something is capable of reasoning or thinking?

[–] glizzyguzzler@lemmy.blahaj.zone 6 points 6 days ago (2 children)

You can prove it's not by doing some matrix multiplication and seeing that it's matrix multiplication. Much easier way to go about it

[–] theunknownmuncher@lemmy.world 19 points 6 days ago* (last edited 6 days ago) (1 children)

Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn't post a relevant or complete thought

Your comment is like saying an audio file isn't really music because it's just a series of numbers.

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 6 days ago* (last edited 6 days ago) (3 children)

Improper comparison; an audio file isn’t the basic action on data, it is the data; the audio codec is the basic action on the data

“An LLM model isn’t really an LLM because it’s just a series of numbers”

But the actions that turn the series of numbers into something of value (an audio codec for an audio file, matrix math for an LLM) are actions that can be analyzed

And clearly matrix multiplication cannot reason any better than an audio codec algorithm. It’s matrix math; it’s cool, we love matrix math. Really big matrix math is really cool and makes real-sounding stuff. But it’s just matrix math, and that’s how we know it can’t think

[–] theunknownmuncher@lemmy.world 5 points 6 days ago* (last edited 6 days ago) (1 children)

LOL you didn't really make the point you thought you did. It isn't an "improper comparison" (it's called a false equivalency FYI), because there isn't a real distinction between information and this thing you just made up called "basic action on data", but anyway have it your way:

Your comment is still exactly like saying an audio pipeline isn't really playing music because it's actually just doing basic math.

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 3 days ago (1 children)

I was channeling the Interstellar docking computer (“improper contact” in such a sassy voice) ;)

There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

An audio codec (not a pipeline) is just actually doing math - just like the workings of an LLM. There’s plenty of work to be done after the audio codec decodes the m4a to get to tunes in your ears. Same for an LLM, sandwiching those matrix multiplications that make the magic happen are layers that crunch the prompts and assemble the tokens you see it spit out.

LLMs can’t think, that’s just the fact of how they work. The problem is that AI companies are happy to describe them in terms that make you think they can think to sell their product! I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago. AI companies will string the LLMs together and let them chew for a while to try to make themselves catch when they’re dropping bullshit. It’s still not thinking and reasoning though. They can be useful tools, but LLMs are just tools, not sentient or verging on sentient

[–] theunknownmuncher@lemmy.world 0 points 3 days ago* (last edited 2 days ago) (1 children)

There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

Incorrect. You might want to take an information theory class before speaking on subjects like this.

I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago.

Lmao yup totally, it's not like this type of research currently gets huge funding at universities and institutions or anything like that 😂 it's a dead research field because it's already "settled". (You're wrong 🤭)

LLMs are just tools not sentient or verging on sentient

Correct. No one claimed they are "sentient" (you actually mean "sapient", not "sentient", but it's fine because people commonly mix these terms up. Sentience is about the physical senses. If you can respond to stimuli from your environment, you're sentient, if you can "I think, therefore I am", you're sapient). And no, LLMs are not sapient either, and sapience has nothing to do with neural networks' ability to mathematically reason or use logic, you're just moving the goalpost. But at least you moved it far enough to be actually correct?

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 9 hours ago (1 children)

It’s wild, we’re just completely talking past each other at this point! I don’t think I’ve ever gotten to a point where I’m like “it’s blue” and someone’s like “it’s gold” so clearly. And like I know enough to know what I’m talking about and that I’m not wrong (unis are not getting tons of grants to see “if AI can think”, no one but fart sniffing AI bros would fund that (see OP’s requested source is from an AI company about their own model), research funding goes towards making useful things not if ChatGPT is really going through it like the rest of us), but you are very confident in yourself as well. Your mention of information theory leads me to believe you’ve got a degree in the computer science field. The basis of machine learning is not in computer science but in stats (math). So I won’t change my understanding based on your claims since I don’t think you deeply know the basis just the application. The focus on using the “right words” as a gotchya bolsters that vibe. I know you won’t change your thoughts based on my input, so we’re at the age-old internet stalemate! Anyway, just wanted you to know why I decided not to entertain what you’ve been saying - I’m sure I’m in the same boat from your perspective ;)

[–] theunknownmuncher@lemmy.world 1 points 6 hours ago (1 children)

loses the argument "we’re at the age-old internet stalemate!" LMAO

Indeed I did not, we’re at a stalemate because you and I do not believe what the other is saying! So we can’t move anywhere since it’s two walls. Buuuut Tim Apple got my back for once, just saw this now!: https://lemmy.blahaj.zone/post/27197259

I’ll leave it at that, as thanks to that white paper I win! Yay internet points!

[–] BB84@mander.xyz 4 points 6 days ago

Can humans think?

[–] DarkDarkHouse@lemmy.sdf.org 4 points 6 days ago (1 children)

Do LLMs not exhibit emergent behaviour? But who am I, a simple skin-bag of chemicals, to really say.

They do not, and I, a simple skin-bag of chemicals (mostly water tho) do say

[–] whaleross@lemmy.world 7 points 6 days ago (2 children)

People that cannot do matrix multiplication do not possess the basic concepts of intelligence now? Or is software that can do matrix multiplication intelligent?

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 3 days ago (1 children)

So close: LLMs work via matrix multiplication, which is well understood by many meat bags, and matrix math can’t think. If a meat bag can’t do matrix math, that’s ok, because the meat bag doesn’t work via matrix multiplication. lol imagine forgetting how to do matrix multiplication and disappearing into a singularity or something

[–] whaleross@lemmy.world 1 points 3 days ago

Well, on the other hand, meat bags can't really do neuron stuff either, despite it being essential for any meat bag operation. Humans are still here though, and so are dogs.

It's a developer option that isn't generally available on consumer-facing products. It's literally just a debug log that outputs the steps to arrive at a response, nothing more.

It's not about novel ideation or reasoning (programmatic neural networks don't do that), but just an output of statistical data that says "Step 1 was 90% certain, Step 2 was 89% certain... etc."
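
(A toy sketch of what that kind of debug log amounts to: each step is just the probability distribution the model assigned to its next token, with the top probability printed out. The numbers below are randomly generated stand-ins for illustration, not any vendor's actual tooling.)

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Stand-in logits over a tiny vocabulary; in a real model these would come
# from the forward pass, not a random number generator.
for step in ("Step 1", "Step 2", "Step 3"):
    probs = softmax(rng.normal(size=5) * 3)
    print(f"{step}: most likely next token was {probs.max():.0%} certain")
```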

[–] AnneBonny@lemmy.dbzer0.com 2 points 6 days ago (2 children)

Who has claimed that LLMs have the capacity to reason?

[–] theparadox@lemmy.world 12 points 6 days ago (2 children)

More than enough people who claim to know how it works think it might be "evolving" into a sentient being inside its little black box. Example from a conversation I gave up on... https://sh.itjust.works/comment/18759960

[–] theunknownmuncher@lemmy.world 6 points 6 days ago

I don't want to brigade, so I'll put my thoughts here. The linked comment is making the same mistake about self preservation that people make when they ask an LLM to "show its work" or explain its reasoning. The text response of an LLM cannot be taken at its word or used to confirm that kind of theory. It requires tracing the logic under the hood.

Just like how it's not actually an AI assistant, but trained and prompted to output text that is expected to be what an AI assistant would respond with: if it is expected that it would pursue self preservation, then it will output text that matches that. Its output is always "fake"

That doesn't mean there isn't a real potential element of self preservation, though, but you'd need to dig and trace through the network to show it, not use the text output.

[–] AnneBonny@lemmy.dbzer0.com 2 points 6 days ago

Maybe I should rephrase my question:

Outside of comment sections on the internet, who has claimed or is claiming that LLMs have the capacity to reason?

[–] adespoton@lemmy.ca 6 points 6 days ago (1 children)

The study being referenced explains in detail why they can’t. So I’d say it’s Anthropic who stated LLMs don’t have the capacity to reason, and that’s what we’re discussing.

The popular media tends to go on and on, conflating AI with AGI and synthetic reasoning.

[–] theunknownmuncher@lemmy.world 5 points 6 days ago (1 children)

You're confusing the confirmation that the LLM cannot explain its under-the-hood reasoning as text output with a confirmation that it is not able to reason at all. Anthropic is not claiming that it cannot reason. They actually find that it performs complex logic and behavior like planning ahead.

[–] adespoton@lemmy.ca 1 points 6 days ago (1 children)

No, they really don’t. It’s a large language model. Input cues instruct it as to which weighted path through the matrix to take. Those paths are complex enough that the human mind can’t hold all the branches and weights at the same time. But there’s no planning going on; the model can’t backtrack a few steps, consider different outcomes and run a meta analysis. Other reasoning models can do that, but not language models; language models are complex predictive translators.

[–] theunknownmuncher@lemmy.world 8 points 6 days ago (1 children)

To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with "grab it"), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.

Instead, we found that Claude plans ahead. Before starting the second line, it began "thinking" of potential on-topic words that would rhyme with "grab it". Then, with these plans in mind, it writes a line to end with the planned word.

🙃 actually read the research?

No, they’re right. The “research” is biased by the company that sells the product and wants to hype it. Many layers don’t make it think or reason, but they’re glad to put those words in quotes that they hope peeps will forget were there.

[–] theunknownmuncher@lemmy.world -1 points 6 days ago* (last edited 6 days ago) (3 children)

It's true that LLMs aren't "aware" of what internal steps they are taking, so asking an LLM how it reasoned out an answer will just produce text that statistically sounds right based on its training set, but to say something like "they can never reason" is provably false.

It's obvious that you have a bias and desperately want reality to confirm it, but there's been significant research and progress in tracing internals of LLMs, that show logic, planning, and reasoning.

EDIT: lol you can downvote me but it doesn't change evidence based research

It’d be impressive if the environmental toll making the matrices and using them wasn’t critically bad.

Developing a AAA video game has a higher carbon footprint than training an LLM, and running inference uses significantly less power than playing that same video game.

[–] glizzyguzzler@lemmy.blahaj.zone 12 points 6 days ago (1 children)

Too deep on the AI propaganda there, it’s completing the next word. You can give the base LLM umpteen layers to make complicated connections; it still ain’t thinking.

The LLM corpos trying to get nuclear plants to power their gigantic data centers while AAA devs aren’t trying to buy nuclear plants says that’s a straw man and you simultaneously also are wrong.

Using a pre-trained and memory-crushed LLM that can run on a small device won’t take up too much power. But that’s not what you’re thinking of. You’re thinking of the LLM only accessible via ChatGPT’s API, which has a yuge context length and massive matrices that need hilariously large amounts of RAM and compute power to execute. And it’s still a facsimile of thought.
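
(Quick, assumption-laden arithmetic on the “memory-crushed” point: weight memory is roughly parameter count times bits per weight, which is why a small quantized model fits on one consumer device while the giant hosted ones need a data center. The model sizes below are illustrative, not any specific product.)

```python
def weight_memory_gib(params_billions: float, bits_per_weight: int) -> float:
    """Rough weight-only footprint: parameters * bits, converted to GiB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

# Illustrative sizes only.
print(f"7B model, 4-bit quantized: {weight_memory_gib(7, 4):6.1f} GiB")    # ~3.3 GiB
print(f"7B model, fp16:            {weight_memory_gib(7, 16):6.1f} GiB")   # ~13.0 GiB
print(f"400B model, fp16:          {weight_memory_gib(400, 16):6.0f} GiB") # ~745 GiB
```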

It’s okay they suck and have very niche actual use cases - maybe it’ll get us to something better. But they ain’t gold, they ain't smart, and they ain’t worth destroying the planet.

[–] ohwhatfollyisman@lemmy.world 6 points 6 days ago (1 children)

but there's been significant research and progress in tracing internals of LLMs, that show logic, planning, and reasoning.

would there be a source for such research?

[–] theunknownmuncher@lemmy.world 8 points 6 days ago (1 children)
[–] ohwhatfollyisman@lemmy.world 7 points 6 days ago (1 children)

but this article espouses that llms do the opposite of logic, planning, and reasoning?

quoting:

Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning,

are there any sources which show that llms use logic, conduct planning, and reason (as was asserted in the 2nd level comment)?

[–] theunknownmuncher@lemmy.world 6 points 6 days ago

No, you're misunderstanding the findings. It does show that LLMs do not explain their reasoning when asked, which makes sense and is expected. They do not have access to their inner workings and just generate a response that "sounds" right, but tracing their internal logic shows they operate differently from what they claim when asked. You can't ask an LLM to explain its own reasoning. But the article shows how they've made progress with tracing under the hood, and the surprising results they found about how it is able to do things like plan ahead, which defeats the misconception that it is just "autocomplete"