[–] bluGill@fedia.io 12 points 1 day ago (5 children)

I've been writing a lot of code with AI. For every half hour the AI needs to write the code, I need a full week to revise it into good code. If you don't do that hard work, the AI is going to overwhelm the reviewers with garbage.

[–] actionjbone@sh.itjust.works 49 points 1 day ago (2 children)

So, what you're saying is, you're not writing code.

[–] bluGill@fedia.io 19 points 1 day ago (3 children)

I'm writing code because it's often faster than explaining to the AI how to do it. I'm spending this month seeing what AI can do; it ranges from saving me a lot of tedious effort to making a large mess I then have to clean up.

[–] LedgeDrop@lemmy.zip 10 points 1 day ago

I've had better success when using AI agents in repeated but small, narrow doses.

It's been kinda helpful in brainstorming interfaces (and I always have to append "... in the most maintainable way possible" to the end of every prompt).

It's been really helpful in writing unit tests (I follow Test Driven Development), and sometimes it picks up edge cases I would have overlooked.
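
For example, here's a minimal sketch of the kind of edge-case test it might propose (the function and the test cases are invented for illustration, in Python/pytest):

```python
# Hypothetical example: a TDD-style test an agent might suggest.
# parse_price is an invented function, purely for illustration.
import pytest

def parse_price(text: str) -> float:
    """Parse a price string like '$1,234.56' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)

def test_parse_price_basic():
    assert parse_price("$1,234.56") == 1234.56

def test_parse_price_edge_cases():
    # The kind of cases an agent sometimes spots before I do:
    assert parse_price("  $0.99 ") == 0.99
    with pytest.raises(ValueError):
        parse_price("")   # empty input
    with pytest.raises(ValueError):
        parse_price("$")  # currency symbol only
```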

I wouldn't blindly trust any of it, as all too often it's happy to just disregard any sort of error handling (unless explicitly mentioned, after the fact). It's basically like being paired up with an over-eager, under-qualified junior developer.

But, yeah, you're gonna have a bad time if you prompt it to "write me a Unix operating system in web assembly".

[–] Thorry@feddit.org 6 points 1 day ago (2 children)

I totally get it. I've been critical of using AI for coding at work and have pleaded for us to stop using it (management is forcing it; the less experienced folks want it). So one of the proponents challenged me to use a very specific tool, supposedly one of the best AI slop generators out there.

So I spent a lot of time thoroughly writing specs for a task, in a way the tool should have been able to handle. It failed miserably and didn't even produce a usable result. So I asked the guy who challenged me to help me refine the specs, tweak the tool, and make everything perfect. It still failed hard.

I was told that was because I was forcing the tool into decisions it couldn't handle, and that I should give it more freedom. So we did that: it made up its own rules and then didn't follow them. Another failure. So we split the task into smaller pieces; it still couldn't handle it. So we split it up even further, to a ridiculous level, at which point it would definitely have been faster to just write the code manually. It's also no longer a realistic test, since by then we had pretty much worked out the end result ourselves and were just coaching the tool to get there.

And even then it kept making mistakes, had to be corrected all the time, and didn't follow the specs, the code guidelines, or best practices. Another really annoying thing is that it keeps changing code it shouldn't touch; since we made the steps so small, it keeps messing up work it did previously. And the comments it writes are crazy: either just about every line gets a comment and every function gets a whole story, or there are zero comments. As soon as you tell it to limit comments to where they're useful, it deletes all of them, even the ones it added earlier or that we wrote manually.

I'm ready to give up on the thing and push to have the use of AI coding tools limited, if not stopped outright. But I know how that discussion will go: "Oh, you used tool A? No, you should be using tool B, it's much better. Maybe the tools aren't there yet, but they're getting better all the time, so we'll see the benefits any day now."

When I hear even experienced devs being enthusiastic about AI tools, I really feel like I'm going crazy. They suck a lot and aren't all that useful (on top of the thousand other issues with AI), so why do people like them? And why have we bet the entire economy on them?

[–] mcv@lemmy.zip 7 points 1 day ago (2 children)

I've started using it as an interactive rubber duck. When I've got a problem, I explain it to the AI, after which it gives a response that I ignore because after explaining it, I figured it out myself.

AI has been very helpful for finding my way around Azure deployment problems, though, and other complex configuration issues (I was missing a certificate to use az login). I fixed problems I probably couldn't have solved without it.
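
(For context, the fix boiled down to certificate-based service principal auth. A rough Python sketch of the SDK-side equivalent, with the IDs and cert path as placeholders rather than my exact setup:)

```python
# Sketch: certificate-based service principal auth, roughly the SDK
# equivalent of `az login --service-principal` with a client certificate.
# The tenant/client IDs and cert path below are placeholders.
from azure.identity import CertificateCredential

credential = CertificateCredential(
    tenant_id="<tenant-guid>",
    client_id="<app-registration-guid>",
    certificate_path="sp-cert.pem",  # PEM containing private key + cert
)

# Any Azure SDK client accepts this credential; here we just request a
# token to confirm the certificate actually works.
token = credential.get_token("https://management.azure.com/.default")
print("token acquired, expires at", token.expires_on)
```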

But I've lost a lot of time trying to get it to solve complex coding problems. It makes a heroic effort trying to combine aspects of known patterns and algorithms into something resembling a solution, and it can "reason" about how it should work, but it doesn't really understand what it's doing.

[–] addie@feddit.uk 3 points 1 day ago

Which is strange, because Azure's documentation is complete dogshit.

We were trying to solve something at work (sending SMTP messages using OAuth authentication, not rocket science), and Azure's own chatbot kept making up non-existent server commands, REST endpoints that don't exist, and phantom permissions that supposedly needed to be added to the account.
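
For reference, the thing we actually needed is just the standard SASL XOAUTH2 handshake. A minimal Python sketch, with the host, addresses, and token acquisition as placeholder assumptions:

```python
import base64
import smtplib

def xoauth2(user: str, access_token: str) -> str:
    # SASL XOAUTH2 initial response: user=<addr>\x01auth=Bearer <token>\x01\x01
    raw = f"user={user}\x01auth=Bearer {access_token}\x01\x01"
    return base64.b64encode(raw.encode()).decode()

# Placeholder: a real token needs the SMTP.Send scope (obtained elsewhere,
# e.g. via an MSAL client credentials flow).
access_token = "<oauth-access-token>"

with smtplib.SMTP("smtp.office365.com", 587) as smtp:
    smtp.starttls()
    smtp.ehlo()
    code, resp = smtp.docmd("AUTH", "XOAUTH2 " + xoauth2("user@example.com", access_token))
    if code != 235:
        raise RuntimeError(f"XOAUTH2 auth failed: {code} {resp!r}")
    smtp.sendmail("user@example.com", ["dest@example.com"],
                  b"Subject: test\r\n\r\nhello")
```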

Seriously: fuck Azure, fuck Copilot. It made a task that should have taken hours take weeks.

[–] regedit@lemmy.zip 2 points 23 hours ago* (last edited 23 hours ago)

after explaining it, I figured it out myself.

I use colleagues or people on Discord for this. I get the solution immediately after asking, AND the people who saw or heard me ask now think I'm an idiot. It's my neurodivergent kink!

[–] prex@aussie.zone 3 points 1 day ago (1 children)

Nah bro U just prompting wrong trust me bro just one more tool.

/S

[–] oftenawake@lemmy.dbzer0.com 0 points 18 hours ago

Let's get tool B to fix the code from tool A bro, it'll work bro trust me! /s

[–] jbloggs777@discuss.tchncs.de 3 points 1 day ago

You will need more than a month to figure out what it's good for and what it isn't, and to learn how to use it effectively as a tool.

If I can properly state a problem, outline the approach I want, and break it down into testable stages, it can be an accelerator. If not, it often produces slop.

The most valuable time is spent up front on design and planning, and on learning how to express them. Next comes the ability to quickly make judgement calls and to backtrack without getting bogged down.

[–] bufalo1973@piefed.social -1 points 1 day ago

Maybe it's for work and it's required 🤷‍♂️

[–] Peehole@piefed.social 5 points 1 day ago* (last edited 1 day ago) (1 children)

With proper prompting you can let it do a lot of annoying stuff, like refactors, reasonably well. With a very strict linter you can avoid the most stupid mistakes and shortcuts. If I work on a more complex PR, it can take me a couple of days to plan it correctly, and then the actual implementation of the correct plan takes no time at all.

I think it works for small bug fixes on a maintainable codebase, and it works for writing plans and then implementing them. But I honestly don't know if it's any faster than just writing the code myself; it's just different.

[–] fuck_u_spez_in_particular@lemmy.world 4 points 22 hours ago (1 children)

reasonably well

Hmm, not in my experience. If you don't care about code quality you can quickly prototype slop and see if it generally works, but maintainable code? I always fall back to manual coding, and my code often ends up around 30% of the length of what the AI generates, as well as more readable, more efficient, etc.

If you constrain it a lot it might work reasonably well, but then I often think that instead of writing a multi-paragraph prompt, just writing the code might have been more effective (long-term, that is).

plan it correctly and the actual implementation of the correct plan will take no time at all.

That's why I don't think AI really helps that much: you still have to think and understand (at least if you value your product/code), and that's what takes the most time, not the typing.

it's just different.

Yeah, it makes you dumber, because you're tempted not to think through the problem, and reviewing code is a less effective way of understanding what's going on in it (IME, although I do think being able to review quickly and effectively is an especially valuable skill these days).

[–] Peehole@piefed.social 2 points 20 hours ago

Eh, I don't disagree with you. It's just the reality for me that I'm now expected to work on much more stuff at the same time because of AI. It's exhausting, but in my job I have no choice, so I try to come to terms with the situation.

I have certainly lost a lot of understanding of the details of the codebase, but I do read every line of code these LLMs spit out and manually review all PRs for obvious bullshit. I also think code quality has gotten worse, despite me doing everything I can to keep it decent.

[–] resipsaloquitur@lemmy.world 4 points 1 day ago

Sounds like that couple that kept rescuing cats that were promptly eaten by coyotes.

[–] sefra1@lemmy.zip 3 points 1 day ago (1 children)

Not sure why you're getting downvotes; AI is a good tool when used properly.

[–] RalfWausE@feddit.org -2 points 1 day ago (1 children)

It's not. It's an abomination that should be wiped off the face of this earth, and its shills should be shunned.

[–] SCmSTR@lemmy.blahaj.zone 1 points 10 hours ago (1 children)

I mean, yes, but that's also a bit nuclear. Machine learning has real, genuinely good, ethical, and responsible uses... The problem is that society has yet to agree on the philosophy of what those are, and most business-first minded people have SUPER shitty, or even completely missing, moral compasses.

So, effectively, what you say: yes. But technically, with much nuance and many caveats: not entirely.

We are clearly not ready as a species to handle it. Though maybe we'll burn the shit out of our hands badly enough over the next century to learn. Either way, it's DEFINITELY not an "ignore all risk and run blindly at this shiny new flame" thing, the way a lot of people seem to think of and treat it.

[–] RalfWausE@feddit.org 2 points 1 hour ago

The thing is: "AI" can be a useful tool in the hands of a competent programmer, media creator, and so forth... BUT it is literally the dark side of the Force. To bring in the Yoda quote:

Luke: ... Is the dark side stronger?

Yoda: No, no, no. Quicker, easier, more seductive.

The problem is that it allows a horde of fools to create software that is, at best, dysfunctional and, at worst, genuinely dangerous. Yes, it was always possible to fake photographs and create false video evidence of events, but that required money, knowledge, or both. Now anyone can, with almost no training, create realistic-looking pictures and videos, leading to god knows what.

And don't get me started on the environmental aspects of this technology...

Perhaps some day in the future, when the hype is gone (and hopefully most of the shitty people pushing it with it), it might be possible to use this technology the right way... but the hype and the push to use it won't go away until we push back at least as hard as the proponents push forward.