[–] harryprayiv@infosec.pub 185 points 3 weeks ago (51 children)

To understand what's actually happening, Anthropic's researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.

Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it's a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.
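To make the brain-scanner analogy concrete, here's a minimal sketch (in PyTorch, and emphatically not Anthropic's actual tooling) of recording which components of a toy network are active on a given input. Circuit tracing itself is far more involved; the model and the "activity" score here are invented purely for illustration.

```python
# A toy "which parts light up" probe, loosely analogous to the brain-scanner
# comparison above. This is NOT Anthropic's circuit tracing; it just records
# per-component activity in an invented model using PyTorch forward hooks.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 4),
)

activity = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Mean absolute activation as a crude "how active is this part" score.
        activity[name] = output.detach().abs().mean().item()
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.randn(1, 8))

for name, score in activity.items():
    print(f"component {name}: activity {score:.3f}")
```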

[Image: This is why LLMs are so patchy at math. (Image credit: Anthropic)]

Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. "Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.
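A toy Python rendering of those two parallel pathways, following the article's description rather than Claude's actual circuits; the rounding-based ballpark below is a stand-in for the model's much fuzzier "92ish" estimate.

```python
# Toy illustration of the two pathways described above for 36 + 59.
# This mimics the *description*, not the model's real machinery.

def approximate_path(a, b):
    # Rough magnitude: add coarsely rounded operands ("40ish + 60ish").
    return round(a, -1) + round(b, -1)   # 40 + 60 = 100

def last_digit_path(a, b):
    # Exact ones digit: 6 + 9 = 15, so the answer must end in 5.
    return (a % 10 + b % 10) % 10

def combine(a, b):
    approx = approximate_path(a, b)      # ballpark (a multiple of 10)
    ones = last_digit_path(a, b)         # exact final digit
    lo, hi = approx - 10 + ones, approx + ones
    # Snap the ballpark to the nearest value with the right ones digit,
    # breaking ties downward.
    return lo if abs(lo - approx) <= abs(hi - approx) else hi

print(combine(36, 59))  # 95
```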

But here's the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
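For contrast, the textbook carry procedure Claude claims to have used is easy to write down:

```python
# The standard column-addition story Claude tells about itself:
# add the ones, carry the 1, add the tens (two-digit inputs only).
def column_add(a, b):
    carry, ones = divmod(a % 10 + b % 10, 10)   # 6 + 9 = 15 -> carry 1, ones 5
    tens = a // 10 + b // 10 + carry            # 3 + 5 + 1 = 9
    return tens * 10 + ones

print(column_add(36, 59))  # 95
```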

In other words, not only does the model use a very odd method to do the maths, but you also can't trust its explanations of what it has just done. That's significant: it shows that model outputs cannot be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.

Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

"The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going."

[Image: Anthropic discovered that their Claude LLM didn't just predict the next word. (Image credit: Anthropic)]

Anthropic also found, among other things, that Claude "sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal 'language of thought'."

Anywho, there's apparently a long way to go with this research. According to Anthropic, "it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words." And the research doesn't explain how the structures inside LLMs are formed in the first place.

But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don't understand—actually work. And that has to be a good thing.

[–] MudMan@fedia.io 84 points 3 weeks ago (33 children)

Is that a weird method of doing math?

I mean, if you give me something borderline nontrivial like, say 72 times 13, I will definitely do some similar stuff. "Well it's more than 700 for sure, but it looks like less than a thousand. Three times seven is 21, so two hundred and ten, so it's probably in the 900s. Two times 13 is 26, so if you add that to the 910 it's probably 936, but I should check that in a calculator."
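Spelled out in Python, each step of that running estimate holds up:

```python
# MudMan's running estimate for 72 * 13, step by step:
assert 72 * 10 == 720    # "more than 700 for sure"
assert 70 * 3 == 210     # "three times seven is 21, so two hundred and ten"
assert 700 + 210 == 910  # the ballpark: "probably in the 900s"
assert 2 * 13 == 26      # the leftover two times 13
assert 910 + 26 == 936   # "probably 936"
assert 72 * 13 == 936    # and the calculator agrees
```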

Do you guys not do that? Is that a me thing?

[–] mr_satan@lemm.ee 4 points 3 weeks ago (2 children)

72 * 10 + 70 * 3 + 2 * 3 = 720 + 210 + 6 = 936

That's what I do in my head if I need an exact result. If I'm approximating I'll probably just do something like 70 * 15, which is much easier to compute (70 * 10 + 70 * 5 = 700 + 350 = 1050).

[–] MudMan@fedia.io 3 points 3 weeks ago (4 children)

OK, I've been willing to just let the examples roll even though most people are just describing how they'd do the calculation, not a process of gradual approximation, which was supposed to be the point of the way the LLM does it...

...but this one got me.

Seriously, you think 70x5 is easier to compute than 70x3? Not only is that a harder one to get to for me in the notoriously unfriendly 7 times table, but it's also further away from the correct answer and past the intuitive upper limit of 1000.

[–] toynbee@lemmy.world 2 points 3 weeks ago (1 children)

The 7 times table is unfriendly?

I love 7 timeses. If numbers were sentient, I think I could be friends with 7.

[–] MudMan@fedia.io 1 points 3 weeks ago (1 children)

I've always hated it, and eight too. I can only remember the entries that are familiar at a glance from the reverse table, and to this day I sometimes just count up and down from those "anchor" references. They're so weird and slippery.

[–] toynbee@lemmy.world 3 points 3 weeks ago

Huh.

Going back to the "being friends" thing, I think you and I could be friends due to applying qualities to numbers; but I think it might be challenging because I find 7 and 8 to be two of the best. They're quirky, but interesting.

Thank you for the insight.

[–] Monument@lemmy.sdf.org 2 points 3 weeks ago

See, for me, it’s not that 7*5 is easier to compute than 7*3, it’s that 5*7 is easier to compute than 7*3.

I saw your other comment about 8's, too, and I've always found those to be a pain, so I reverse them, if not outright convert them to arithmetic problems. 8*4 is some unknown value, but X*8 is always X*10 - 2X, although I do have most of the multiplication tables memorized for lower values.
8*7 is an unknown number that only the wisest sages can compute, however.

[–] mr_satan@lemm.ee 1 points 3 weeks ago

Times 5 and times 10 tables are really easy for me. So yeah, in my mind it's an easier computation.

That being said, having a result of a little over 1000 gives me an estimate of the number's magnitude: it's around a thousand. It might be more or less, but it's not far from there.

[–] Broadfern@lemmy.world 0 points 3 weeks ago* (last edited 3 weeks ago)

For me personally, anything times 5 can be reached by halving the number, then multiplying the result by 10; there's a generalized version in the sketch after the worked example below.

Example: 66 x 5 = Y

  • (66/2) x (5x2) = Y (cancel out the division by creating equal multiplication in the other factor)

  • 66/2 = 33

  • 5x2 = 10

  • 33 x 10 = 330

  • Y = 330
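The same trick, generalized in Python; even inputs can be halved first as in the worked example, and multiplying by 10 first makes odd inputs work too.

```python
# Broadfern's times-5 trick, generalized: x * 5 == (x * 10) / 2.
def times5(x):
    return x * 10 // 2

assert times5(66) == 330
assert times5(7) == 35   # odd inputs work when you multiply by 10 first
```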

[–] singletona@lemmy.world 1 points 3 weeks ago (1 children)

(72 * 10) + (2 * 3) = x

There, fixed, because otherwise order of operation gets fucky.

[–] mr_satan@lemm.ee 2 points 3 weeks ago

No it doesn't; multiplication and division always take precedence over addition and subtraction. You'd only need parentheses to clarify what's in a divisor, since that can be ambiguous in line notation.
