this post was submitted on 09 Aug 2025
555 points (97.1% liked)
Technology
you are viewing a single comment's thread
I ran the tests with the thinking model. It got them right. For these kinds of tasks, choosing the thinking model is key.
Ugh... can we all just stop for a moment to acknowledge how obnoxious this branding is? They've already corrupted the term "AI" to the point of being completely meaningless, are they going to remove all meaning from the word "thinking" now too?
Did they? Afaik, LLMs are an application of AI that falls under natural language processing. It's like calling a rhombus a geometric shape because that's what it is. And this usage goes back decades to, for example, A* pathfinding algorithms and hard-coded decision trees for NPCs.
E: Downvotes for what, disagreeing? At least point out why I'm wrong if you think you know better.
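To make the point above concrete: A* pathfinding, the textbook example of decades-old "game AI", is just a deterministic graph search with a heuristic, no machine learning involved. A minimal sketch on a 2D grid (the grid encoding and Manhattan heuristic here are illustrative choices, not a specific engine's API):

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* pathfinding on a 2D grid (0 = open, 1 = wall).

    Returns the shortest path from start to goal as a list of
    (row, col) tuples, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):
        # Manhattan-distance heuristic: admissible on a 4-connected grid.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Priority queue of (f = g + h, g = cost so far, node, path so far).
    open_heap = [(h(start), 0, start, [start])]
    seen = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(
                        open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [nxt])
                    )
    return None  # goal unreachable
```

Every step is an explicit rule; there is no training data anywhere. Calling this "AI" has been standard usage in games and robotics since long before LLMs existed.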
I think the problem stems from how LLMs are marketed to, and perceived by, the public. They are not marketed as a specific application of this-or-that AI or ML technology. They are marketed as "WE HAVE AI NOW!", and the general public, unfamiliar with AI/ML technologies, equates this with AGI, because that's what they know from the movies. The promotional imagery some of these companies put out, with humanoid robots that look like they came straight out of Ex Machina, doesn't help either.
And sure enough, upon first contact, an LLM looks like a duck and quacks like a duck, so people assume it is a duck. They don't realize it's a cardboard model of a duck with a tape recorder inside that plays back quacking sounds.