this post was submitted on 21 Nov 2025
274 points (96.0% liked)

Technology

you are viewing a single comment's thread
[–] pelespirit@sh.itjust.works 142 points 13 hours ago (2 children)

It says it will finish the code, it doesn't say the code will work.

[–] Thorry@feddit.org 49 points 8 hours ago (4 children)

Also, just because the code works doesn't mean it's good code.

I had to review some code the other day that was clearly created by an LLM. Two classes needed to talk to each other in a fairly complex way, so I would expect one class to create some kind of request data object, submit it to the other class, and get back some kind of response data object.

What the LLM actually did was pretty shocking: it used reflection to reach from one class into the private properties holding the required data inside the other class. It then just straight up stole the data and did the work itself (wrongly, I might add). I just about fell off my chair when I saw it.
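To make the difference concrete, here is a minimal Java sketch (all class and field names are hypothetical, not from the actual codebase): the intended design passes a request object through the other class's public API, while the reflection hack pries the private field out and redoes the work itself.

```java
import java.lang.reflect.Field;

// Intended design: Worker exposes a public API taking a request object.
class Request {
    final int input;
    Request(int input) { this.input = input; }
}

class Response {
    final int result;
    Response(int result) { this.result = result; }
}

class Worker {
    private int secretFactor = 3; // internal detail, not part of the API

    Response handle(Request req) {
        return new Response(req.input * secretFactor);
    }
}

public class ReflectionHack {
    public static void main(String[] args) throws Exception {
        Worker worker = new Worker();

        // Correct: go through the public API.
        System.out.println(worker.handle(new Request(7)).result); // 21

        // The reflection hack: pry out the private field and redo the
        // computation locally, bypassing the API and its invariants.
        Field f = Worker.class.getDeclaredField("secretFactor");
        f.setAccessible(true); // defeats encapsulation
        int stolen = (int) f.get(worker);
        System.out.println(7 * stolen); // same number today, wrong design
    }
}
```

The second version happens to print the same number, but it silently duplicates `Worker`'s logic: the moment `Worker` changes how it computes a result, the reflective copy goes stale without any compiler warning.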

So I asked the dev. He said he didn't fully understand what the LLM did; he wasn't familiar with reflection. But since it seemed to work in the few tests he ran, and the unit tests the LLM generated passed, he figured it would be fine.

The unit tests were wrong too. I explained to the dev that even with humans it's usually a bad idea to have the person who wrote the code also (exclusively) write the unit tests. Whenever possible, have somebody else write them, so the tests don't share the author's assumptions and blind spots. With LLMs this is doubly true: they will straight up lie in the unit tests, if the tests aren't complete nonsense to begin with.
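A contrived Java sketch of such a "lying" test (names and numbers are made up for illustration): a test derived from the implementation's own output encodes the bug and passes, while a test written from the requirement would catch it.

```java
// Requirement: sum of the integers 1..n. The implementation is off by one.
class Calculator {
    static int sumUpTo(int n) {
        int total = 0;
        for (int i = 1; i < n; i++) { // bug: should be i <= n
            total += i;
        }
        return total;
    }
}

public class LyingTest {
    public static void main(String[] args) {
        // Test generated from the buggy code's own behavior:
        // sumUpTo(4) returns 6, so the "test" expects 6 and goes green.
        assertEquals(6, Calculator.sumUpTo(4));

        // A test written from the requirement would expect 1+2+3+4 = 10
        // and fail, exposing the bug:
        // assertEquals(10, Calculator.sumUpTo(4));
        System.out.println("implementation-derived test passed");
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError(expected + " != " + actual);
        }
    }
}
```

The green checkmark on the first assertion verifies nothing except that the code agrees with itself, which is exactly the blind spot an independent test author avoids.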

I swear to the gods, LLMs don't save time or money; they just give the illusion that they do. A task that should take a few hours gets done in 20 minutes and everyone claps. But then another task takes twice as long as it should, and nobody looks at that. And the quality suffers a lot, without anyone really noticing.

[–] Kissaki@feddit.org 2 points 31 minutes ago

So I asked the dev, he said he didn’t fully understand what the LLM did, he wasn’t familiar with reflection.

Big baffling facepalm moment.

If they'd at least prefix the changeset description with a note that it was LLM-generated, it would be easier to interpret and assess.

[–] criss_cross@lemmy.world 1 point 34 minutes ago

They’ve been great for me at optimizing bite sized annoying tasks. They’re really bad at doing anything beyond that. Like astronomically bad.

[–] Pieisawesome@lemmy.dbzer0.com 1 point 1 hour ago (1 children)

Why would unit tests not be written by the same person? That doesn’t make a lot of sense…

[–] Kissaki@feddit.org 2 points 30 minutes ago* (last edited 29 minutes ago)

They did say why they do it:

Whenever possible have somebody else write the unit tests, so they don’t have the same assumptions and blind spots.

Did that not make sense to you?

I usually wouldn't do that, because it's a bigger investment. But it certainly makes logical sense to me and is something teams can weigh and decide on.

[–] airgapped@piefed.social 7 points 5 hours ago

Great description of a problem I noticed with most LLM generated code of any decent complexity. It will look fantastic at first but you will be truly up shit creek by the time you realise it didn't generate a paddle.

[–] TORFdot0@lemmy.world 9 points 10 hours ago (1 children)

I was going to say. The code won't compile, but it will be "finished".

[–] WaitThisIsntReddit@lemmy.world 7 points 10 hours ago (1 children)

A couple of agent iterations and it will compile. It definitely won't do what you wanted, though, and if it does, it'll be done in the dumbest way possible.

[–] TORFdot0@lemmy.world 7 points 10 hours ago* (last edited 10 hours ago) (1 children)

Yeah, you can definitely bully AI into giving you something that will run if you yell at it long enough. I don't have that kind of patience.

Edit: typically I see it just silently dump errors to /dev/null if you complain about it not working lol

[–] Darkenfolk@sh.itjust.works 1 points 8 hours ago

And people say that AI isn't humanlike. That's peak human behavior right there, having to bother someone out of procrastination mode.

The edit makes it even better, sweeping things under the rug? Hell yeah!