Before hitting submit, I'd worry I'd made some silly mistake that would make me look like a fool and waste their time.
Do they think the AI-written code Just Works (TM)? Do they feel so detached from that code that they feel no embarrassment when it's shit? It's like calling yourself a fiction writer and putting "written by (your name)" on the cover of a book you didn't write, and the book is nonsense anyway.
Nowadays people use OpenClaw agents which don't really involve human input beyond the initial "fix this bug" prompt. They independently write the code, submit the PR, argue in the comments, and might even write a hit piece on you for refusing to merge their code.
AI bros have zero self-awareness or shame, which is why I keep saying the best tool for fighting this is making it socially shameful.
Somebody comes along saying "Oh look at this image I just genera..." and you cut them off with "looks like absolute garbage, right? Yeah, I know, AI always sucks, imagine seriously enjoying that, hahah. So anyway, what were you saying?"
Not good enough, you need to poison the data
I don't want my data poisoned, I'd rather just poison the AI bros.
Yeah but then their Facebook accounts will keep producing slop even after they're gone.
Tempting, but even that is not good enough, as another reply pointed out:
the data eventually poisons itself when it can do nothing but refer to its own output from however many generations of hallucinated data
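You can see that feedback loop numerically with a toy sketch (purely illustrative, not from any real pipeline): "train" a trivial model by fitting a Gaussian, then give the next generation nothing but samples drawn from that fit. Estimation error compounds, so the learned distribution drifts away from the original data.

```python
import random
import statistics

random.seed(1)

# Generation 0: "real" data, 50 samples from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(50)]

for generation in range(26):
    # "Training" here is just fitting a Gaussian to whatever data we have.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")
    # The next generation never sees real data, only the current fit's output.
    data = [random.gauss(mu, sigma) for _ in range(50)]
```

Run it and the printed mean/stdev walk away from (0, 1) within a couple dozen generations, and nothing in the loop can pull them back, because the real data is gone.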
From what I have seen, Anthropic, OpenAI, etc. seem to be running bots that go around submitting updates to open source repos with little to no human input.
Doesn't someone have to review those submissions before they're published?
You guys, it's almost as if AI companies are intentionally trying to kill FOSS projects by burying them in garbage code. Sounds like they took a page from Steve Bannon's playbook: flood the zone with slop.
at least with FOSS the horseshit is being done in public.
Can Cloudflare help prevent this?
LLM code generation is the ultimate Dunning-Kruger enhancer. They think they're 10x ninja wizards because they can generate unmaintainable demos.
They’re not going to maintain it - they’ll just throw it back to the LLM and say “enhance”.
Sigh, now in CSI when they enhance a grainy image, the AI will make up a fake face and send them searching for someone who doesn't exist, or it'll use the face of someone from the training set and they'll go after the wrong person.
Either way, I have a feeling there'll be some ENHANCE failure episode due to AI.
yes.
literally yes.
It's insane
That's how you know who never even tried to run the code.
Reminds me of one job I had where, shortly after I started, my boss asked if their entry test was too hard. They had gotten several submissions from candidates whose code wouldn't even run.
I imagine these types of people are now vibe coding.
Super lazy job applications… can’t even bother to put two minutes into vibing.
that's the annoying part.
LLM code can range from "doesn't even compile" to "it actually works as requested".
The problem is, depending on what exactly was done, the model will move mountains to actually get it running as requested, and it will absolutely trash anything in its way, from "let's abstract this with 5 new layers" to "I'm going to refactor that whole class of objects to get this simple method in there".
The requested feature might actually work. 100%.
It's just very possible that it either broke other stuff, or made the codebase less maintainable.
That's why it's important that people actually know the codebase and know what they/the model are doing. Just going "works for me, glhf" is not a good way to keep a codebase maintainable.
LOL. So true.
On top of that, an LLM can also take you on a wild goose chase. When it gives you trash and you tell it to find a way to fix it, it introduces new layers of complication and installs new libraries without ever really approaching a solution. It's up to the programmer to notice a wild goose chase like that and pull the plug early.
That’s a fun little mini-game that comes with vibe coding.
I would think they'll have to combat AI code with an AI-code recognizer tool that auto-flags a PR or issue as AI, so maintainers can simply run through and auto-close them. If the contributor doesn't come back to explain the code and show test results proving it works, the PR gets auto-closed after a week or so.
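As a rough sketch of what that triage bot could look like (entirely hypothetical: the repo, the `suspected-ai` label, and the idea that some detector already applied it are all assumptions for illustration), using the standard GitHub REST API:

```python
# Hypothetical auto-triage sketch: find open PRs carrying an AI-flag label,
# and close any that have gone a week with no follow-up from the author.
# The repo, label name, and grace period below are placeholders.
import os
from datetime import datetime, timedelta, timezone

import requests

REPO = "example-org/example-repo"   # placeholder repo
LABEL = "suspected-ai"              # placeholder label a detector might apply
GRACE_PERIOD = timedelta(days=7)
API = f"https://api.github.com/repos/{REPO}"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def stale_flagged_prs():
    """Yield open PRs with the AI label and no activity in the grace period."""
    resp = requests.get(f"{API}/pulls", headers=HEADERS, params={"state": "open"})
    resp.raise_for_status()
    cutoff = datetime.now(timezone.utc) - GRACE_PERIOD
    for pr in resp.json():
        labels = {label["name"] for label in pr["labels"]}
        updated = datetime.fromisoformat(pr["updated_at"].replace("Z", "+00:00"))
        if LABEL in labels and updated < cutoff:
            yield pr

for pr in stale_flagged_prs():
    number = pr["number"]
    # Leave an explanation, then close. PR comments go through the issues API.
    requests.post(
        f"{API}/issues/{number}/comments",
        headers=HEADERS,
        json={"body": "Auto-closed: flagged as likely AI-generated and no "
                      "response from the author within 7 days."},
    ).raise_for_status()
    requests.patch(
        f"{API}/pulls/{number}",
        headers=HEADERS,
        json={"state": "closed"},
    ).raise_for_status()
```

In practice you'd run this on a schedule (a nightly cron or GitHub Actions workflow) and paginate the PR list; the hard part is still the detector itself, which this sketch just takes on faith.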