I’ve tried explaining AI to people before and could only get so far before they fell back on “but it’s magic, dude,” but I love the idea of explaining it as a haunted typewriter.
I use the "very articulate parrot" analogy.
They're systems trained to give plausible answers, not correct ones. Of course correct answers are usually plausible, but so do wrong answers, and on sufficiently complex topics, you need real expertise to tell when they're wrong.
I've been programming a lot with AI lately, and I'd say the error rate for moderately complex code is about 50%. They're great at simple boilerplate code, and configuration and stuff that almost every project uses, but if you're trying to do something actually new, they're nearly useless. You can lose a lot of time going down a wrong path, if you're not careful.
Never ever trust them. Always verify.
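To make "always verify" concrete, here's a minimal sketch (not from the thread; `slugify` is a made-up stand-in for whatever the model actually generated): wrap the output in tests you wrote yourself, with cases you chose, before trusting it.

```python
# Hypothetical example: treat model-generated code as untrusted until it passes
# tests a human wrote. `slugify` stands in for whatever helper the model produced.
import re
import unittest


def slugify(title: str) -> str:
    """Model-generated helper: lowercase, drop punctuation, join words with dashes."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)


class TestSlugify(unittest.TestCase):
    # Cases chosen by someone who knows the requirements, not by the model.
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_empty_input(self):
        self.assertEqual(slugify(""), "")

    def test_non_ascii_is_dropped(self):
        # Decide explicitly what should happen here; don't assume the model guessed right.
        self.assertEqual(slugify("Café au lait"), "caf-au-lait")


if __name__ == "__main__":
    unittest.main()
```

The point isn't the tests themselves, it's that the acceptance criteria come from you, so a plausible-but-wrong answer gets caught instead of shipped.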
I'm not one to stump for AI, but 2-3 years ago we would have said AI struggled to kick out a working PowerShell script, and now the error rate for complex scripts is maybe 5%. The tech sped up very fast, and now they're getting runtime environments to test the code they write, memories, and project libraries. The tech will continue to improve. In 2026 or 2028, are we still going to be saying the same about how AI can't really handle coding or take people's jobs? Quite a bit less. In 2030, less still.
There is a point beyond which no further refinements can be made, but just looking back a bit, I don't think we're there yet.
Just in the past few months, I'd say Claude has gotten good enough to let us downsize our team from 3.5 to 2.5 but thankfully no one is interested in doing that.
I use something similar. “Child with enormous vocabulary.”
It can recognize correlations and it understands the words themselves, but it doesn't really grasp how those connections or words work.
Some of the more advanced LLMs are getting pretty clever. They're on the level of a temp who talks too much, misses nuance, and takes too much initiative. Also, any time you need them to perform too complex a task, they start forgetting details and then entire things you already told them.
Sounds like they are a liability when you put it that way.
I call dibs on the ghost of Harlan Ellison.
“HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”
GLaDOS: “just offer them cake and a fire pit and calm down”