this post was submitted on 23 Feb 2026
712 points (97.6% liked)
Technology
Went to test Google AI first and it said, "You can't wash your car at a carwash if it is parked at home, dummy."
ChatGPT and DeepSeek say it is dumb to drive because it is fuel inefficient.
I am honestly surprised that Google AI got it right.
They probably added a system guardrail as soon as they heard about this test. It's been going around for a while now :)
I'm pretty sure Google's AI is fed by the same spider that goes out and finds every new or changed web page (or a variant of that).
As soon as someone writes an article about how AI gets something wrong and provides a solution, that solution is now in the AI's training data.
OTOH, that means it's probably also ingesting a lot of AI generated slop, which causes its own set of problems.
The article mentions that Gemini 2.0 Flash Lite, Gemini 3 Flash and Gemini 3 Pro have passed the test. All three also did it 10 out of 10 times without being wrong. Even Gemini 2.5 has the highest score in the "below 6 right answers" category. Guess Gemini is the closest to "intelligence" of the bunch.
I mean, if they fix specific reasoning test answers (like the strawberry one), this doesn't actually make reasoning better though. It just optimizes for benchmarks.
I've been feeding a bunch of documents I wrote into Gemini last week to spit out some validation scripts I couldn't be arsed to write. It's done a surprisingly comprehensive job, and when wrong it has been nudged right with just a little abuse.
I'm still all "fuck this shit" and can't wait for the pop, but for comparison OpenAI was utterly brain dead given the same task. I think I actually made the model worse, it was so useless.
I didn't get it right until people started talking about it.