this post was submitted on 31 Jul 2025
Not The Onion
That's consistent with our experience using AI "assistants." If it's a common problem, the training set is large enough that there's a decent chance the AI returns a correct answer, though one stripped of contextual knowledge that might be important. But in that case you could just as easily look it up on Stack Overflow. And if it's not a common problem, the AI-proposed solution is likely to be crap: unlikely to account for nonfunctional requirements, architectural guidelines, maintainability, or best practices.
My own principle is that if AI was involved at any step of the coding process, we need to test that code even more thoroughly than usual, because programmers who stay in the business learn over time not to do stupid things, and AI doesn't. When an AI makes a stupid coding suggestion, there's no feedback loop telling it that it fucked up.
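To make that concrete, here's a minimal sketch of what "test it more than usual" can look like. `parse_retry_after` is a hypothetical function of the kind an assistant might propose for parsing an HTTP `Retry-After` header; the edge-case tests are the part a human has to supply, since nothing in the model's feedback loop will.

```python
# A minimal sketch of the "test AI-suggested code harder" principle.
# parse_retry_after is a hypothetical, assistant-style suggestion;
# the tests probe the edge cases such suggestions tend to gloss over.
import unittest


def parse_retry_after(value: str) -> int:
    """Parse an HTTP Retry-After header given as delay-seconds."""
    stripped = value.strip()
    if not stripped.isdigit():
        raise ValueError(f"not a non-negative integer: {value!r}")
    return int(stripped)


class TestParseRetryAfter(unittest.TestCase):
    def test_happy_path(self):
        # The one case an AI suggestion reliably gets right.
        self.assertEqual(parse_retry_after("120"), 120)

    # The tests below are the "even more than usual" part: boundary
    # and garbage inputs that a suggestion rarely handles unprompted.
    def test_surrounding_whitespace(self):
        self.assertEqual(parse_retry_after(" 5 "), 5)

    def test_negative_rejected(self):
        with self.assertRaises(ValueError):
            parse_retry_after("-1")

    def test_empty_rejected(self):
        with self.assertRaises(ValueError):
            parse_retry_after("")


if __name__ == "__main__":
    unittest.main()
```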
That's some sound advice there.