this post was submitted on 18 Feb 2026
904 points (99.3% liked)
Technology
What people don’t realize is that AI does not write good code unless you tell it to. I’ve been experimenting a lot with letting AI do the writing while I give it specific prompts, but even then it very often makes changes that were totally unnecessary. And this is the dangerous part.
I believe the only thing repo owners could do is use AI against AI. Let the blind AI contributors drown in work by constantly telling them to improve the code, and by asking critical questions.
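For illustration, a bot like that wouldn't take much; a rough sketch, where `complete()` is a hypothetical stand-in for whatever LLM client the repo owner uses:

```python
# Sketch only: auto-answer a suspected AI PR with pointed review questions.
# `complete()` is a hypothetical stand-in for whatever LLM client you use.

def complete(prompt: str) -> str:
    """Hypothetical LLM call; wire up your provider of choice here."""
    raise NotImplementedError

REVIEW_PROMPT = """\
You are a skeptical senior reviewer. For the diff below, list the five
most probing questions about correctness, necessity of each change,
and test coverage. Do not praise anything.

Diff:
{diff}
"""

def draft_review_questions(diff: str) -> str:
    # The contributor (or their bot) has to answer these before any
    # human spends time on the PR.
    return complete(REVIEW_PROMPT.format(diff=diff))
```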
Ohhh, that's what I was missing, just tell it to write good code, of course.
"Okay, ChatGPT. Write me a game that will surpass Metal Gear. And make sure the code is actually good."
It sounds crazy, but it can have an impact. It might follow some coding standards it wouldn't otherwise.
But you don't really know. You can also explicitly tell it which coding standards to follow and it still won't.
All code needs to be verified by a human. If you can tell it's AI, it should be rejected. Unless it's a vibe coding project I suppose. They have no standards.
That's the problem with LLMs in general, isn't it? It may give you the perfect answer. It may also give you the perfect-sounding answer while being terribly incorrect. Often, the only way to notice is if you knew the answer in the first place.
They can maybe be used to get a first draft for an email you don't know how to start. Or to write a "funny" poem for the retirement party of Christine from Accounting that makes everyone cringe to death on the spot. Yet people treat them like this hyper-competent, all-knowing assistant. It's maddening.
Exactly. They're trained to produce plausible answers, not correct ones. Sometimes they also happen to be correct, which is great, but you can never trust them.
Obviously you have no clue how LLMs work, and it is way more complex than just telling it to write good code. What I was saying is that even with a very good prompt, it will make things up and you have to double-check it. However, for that you need to be able to read and understand code, which is not the case for 98% of the vibe coders.
So what you're saying directly contradicts your previous comment; in fact, it doesn't produce good code even when you tell it to.
👍
So what you're saying is that in order for "AI" to write good code, I need to double-check everything it spits out and correct it, effectively doing all the work myself. But sure, tell yourself that it saves any amount of time.
It saves my time. That’s all I need.
And wastes someone else's.
Assume whatever you want. I couldn’t care less.
So just don't use LLMs then. The very issue is that mediocre devs just accept whatever it outputs and try to PR that.
Don’t be a mediocre dev.
Of course. It makes it easy to appear as if you've actually done something smart, but in reality it just causes more work for others. I believe that senior devs and engineers know how and when to use an LLM. But if you are a crypto bro trying to develop an ecosystem from scratch, it will be a huge mess.
It is obvious that we will not be able to stop those PRs, so we need to come up with other means: automation that helps the maintainers save time. I have only seen very few repos using automated LLM actions, and I think the main reason for that is the cost of running them.
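To keep that cost bounded, even a simple token budget gate would help; a rough sketch, with all names made up and `complete()` standing in for whatever LLM client you use:

```python
# Sketch of a cost-capped automated review. All names are made up; the
# point is only the budget gate, not any particular CI system or model.

def complete(prompt: str) -> str:
    raise NotImplementedError  # hypothetical LLM call

MAX_TOKENS_PER_PR = 8_000  # rough cap so a flood of PRs cannot run up the bill

def rough_token_count(text: str) -> int:
    return len(text) // 4  # crude heuristic: ~4 characters per token

def maybe_review(diff: str) -> str | None:
    if rough_token_count(diff) > MAX_TOKENS_PER_PR:
        return None  # too big: answer with a canned "please split this PR"
    return complete(f"Review this diff critically:\n{diff}")
```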
So how would you fight the wave of useless PRs?
You're absolutely right. I hadn't realized that I can just tell it to write good code. Thank you, it changed my life.
What do you mean by "tell it to write good code"?
You mean the specific prompts, which you mention afterwards? But those are also hit and miss.
An important factor is to be very specific about what you want it to do. For that, many now use a "plan" mode, where you discuss a change first, adding more and more context. In addition to that, MCP and RAG will improve the quality tremendously if you use them correctly. But even if you are careful and use all the tools available, the LLM might still randomly hallucinate. Maybe it only affects a misplaced semicolon in the code. But if it happens in the orchestrator agent, then a whole feature is affected. The worst part: you cannot test for this, as in many cases the code is syntactically correct; just the implementation is not logical at all.
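In code terms the flow looks roughly like this; a minimal sketch, with `complete()` as a placeholder for whatever LLM client you use, not any specific tool:

```python
# Minimal sketch of the plan-first flow. `complete()` is a placeholder for
# whatever LLM client you use; the prompts are illustrative, not a recipe.

def complete(prompt: str) -> str:
    raise NotImplementedError  # hypothetical LLM call

def plan_then_implement(task: str, context: str) -> str:
    # Phase 1: pin down scope before any code is written.
    plan = complete(
        "Plan the following change. List affected files and steps only, "
        f"no code yet.\nTask: {task}\nRepo context:\n{context}"
    )
    # Phase 2: implement strictly against the agreed plan. This limits what
    # the model may touch, but it can still hallucinate within those limits.
    return complete(f"Implement exactly this plan, changing nothing else:\n{plan}")
```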
So even in the bad case, where you have a wrong feature with working code, you can mitigate this by introducing a critic agent, which is critical of everything by default unless proven otherwise.
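A rough sketch of that loop (again, `complete()` is just a placeholder):

```python
# Sketch of the critic-agent loop: default-reject, iterate until the critic
# has no objections left or we hit a retry cap. `complete()` is a placeholder.

def complete(prompt: str) -> str:
    raise NotImplementedError  # hypothetical LLM call

def criticize(code: str, spec: str) -> str:
    # Ask for objections only; "APPROVE" is the success signal.
    return complete(
        f"Spec:\n{spec}\n\nCode:\n{code}\n\n"
        "List every way the code fails the spec. "
        "Reply with exactly APPROVE if you find nothing."
    )

def generate_with_critic(spec: str, max_rounds: int = 3) -> str:
    code = complete(f"Implement this spec:\n{spec}")
    for _ in range(max_rounds):
        verdict = criticize(code, spec)
        if verdict.strip() == "APPROVE":
            break
        code = complete(f"Revise the code to address:\n{verdict}\n\nCode:\n{code}")
    return code  # still needs a human read; the critic only narrows the search
```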
That’s at least how I do it. And it works well. But it took me roughly two full months to get to this state. So you cannot compare this to the classic “Oh, I just open up CODEX and then I have working software”, because that will not work.