this post was submitted on 23 Oct 2025
73 points (92.9% liked)

all 12 comments
[–] Gawdl3y@pawb.social 42 points 1 week ago* (last edited 1 week ago) (3 children)

The alternative here is they don't allow it and get a bunch of MRs sneakily using AI anyway but not disclosing it. I'd rather be aware that an MR was made with AI than not, personally, so I think this is probably the right move.

[–] kayohtie@pawb.social 14 points 1 week ago

I hate that this is almost certainly the most accurate answer. Maybe it'll shame people into not submitting more often than sneaking it in would have.

[–] EncryptKeeper@lemmy.world 3 points 1 week ago (1 children)

I mean, also, shouldn’t somebody be reviewing these MRs? I’m an infra guy, not a programmer, but doesn’t it like, not really matter how the code in the MR was made as long as it’s reviewed and validated?

[–] calcopiritus@lemmy.world 5 points 1 week ago

The problem with that is that reviewing takes time. Valuable maintainer time.

Curl faced this issue. Hundreds of AI slop "security vulnerabilities" were submitted to curl. Since they were reported as security vulnerabilities, the maintainers couldn't just ignore them; they had to read every one of them, only to find out they weren't real. Wasting a bunch of time.

Most of the slop was basically people typing into ChatGPT "find me a security vulnerability in a project that has a bounty for finding one" and copy-pasting whatever it said into a bug report.

With simple MRs, at least you can just ignore the AI ones and prioritize the human ones if you don't have enough time. But that will just lead to AI slop not being marked as such in order to skip the low-prio AI queue.
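The triage idea described here can be sketched in a few lines. This is a hypothetical illustration, not any real project's API: the MR dicts and the "ai-assisted" label name are assumptions, standing in for whatever tagging scheme a forge actually uses.

```python
# Hypothetical sketch: ordering a review queue so that MRs tagged as
# AI-assisted sort after untagged (human-authored) ones.
# The dict shape and the "ai-assisted" label are made up for illustration.

def triage(mrs):
    """Return MRs with human-authored ones first, AI-tagged ones last.

    sorted() is stable, so the original order is preserved within
    each group (False sorts before True).
    """
    return sorted(mrs, key=lambda mr: "ai-assisted" in mr.get("labels", []))

queue = [
    {"id": 101, "labels": ["ai-assisted"]},
    {"id": 102, "labels": []},
    {"id": 103, "labels": ["bugfix"]},
]

ordered = triage(queue)
# human MRs (102, 103) come before the AI-tagged one (101)
```

Of course, this only works as long as submitters tag honestly, which is exactly the failure mode the comment predicts.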

[–] tabular@lemmy.world 2 points 1 week ago (1 children)

If one wants to avoid software with AI code, then knowing which MRs need replacing helps. However, accepting it encourages more of it and makes it less feasible to prune all the MRs written in part by AI. Disclosing it will become worthless if it becomes the norm.

[–] Attacker94@lemmy.world 1 points 1 week ago

If the code is good, I don't have an issue with it being merged even if AI was used. That being said, I bet the obvious outcome is that either people ignore the policy and nothing changes, or they comply and most reviewers focus on the non-AI group, which is how it was before AI. All in all, this decision can never hurt development, since as far as I am aware there is no requirement to review an MR.

[–] 14th_cylon@lemmy.zip 14 points 1 week ago
[–] calcopiritus@lemmy.world 12 points 1 week ago (1 children)

I hope they are prepared for the AI slop DDoS. Curl wasn't, and curl didn't even state that it would welcome AI contributions.

[–] WhyJiffie@sh.itjust.works 1 points 1 week ago

they can just deprioritize AI MRs if they're tagged as such

[–] phoenixz@lemmy.ca 6 points 1 week ago

As long as it's properly tagged so we can avoid the hell out of it

[–] JoeTheSane@lemmy.world 4 points 1 week ago

Oh, come the fuck on…