this post was submitted on 12 Apr 2026
198 points (92.7% liked)

Technology

[–] null@lemmy.org 12 points 1 hour ago (1 children)

Ah, the solution that recognizes there's no way to eliminate AI from the supply chain after it's already been introduced.

[–] sunbeam60@feddit.uk 2 points 24 minutes ago* (last edited 24 minutes ago)

You make it sound as if there was another choice, if only people had better principles. Pray tell us, what would you have done, now? Not in the past, now.

[–] menas@lemmy.wtf 3 points 1 hour ago

Ecological, social, and economic issues, and the answer is on the legal side. FOSS as usual, I guess.

[–] 404found@lemmy.zip 7 points 2 hours ago (1 children)

I don't understand the full picture here, but the person who is submitting AI slop will be held accountable. Never a company.

So if a company is pushing staff to use AI to complete projects faster and their code ends up being AI slop when submitted, only the person working for the company will be held responsible.

I'm not sure what the repercussions are here but hopefully it's not a large fine. Those fines could add up quick if the person is submitting code all the time and doesn't know they are messing up.

[–] Wispy2891@lemmy.world 14 points 2 hours ago (1 children)

Which fines? This is just an internal rule in an organization.

At most they can be rightfully banned from contributing.

If someone is contributing code they don't really understand, then they shouldn't contribute.

[–] 404found@lemmy.zip 3 points 2 hours ago

Ah okay got it now. Thanks. I didn't understand it all the way. My comment is irrelevant

[–] catlover@sh.itjust.works 26 points 3 hours ago (4 children)

I'd still be highly sceptical about pull requests with code created by LLMs. Personally, what I've noticed is that the author of such a PR doesn't even read the code, and I have to go through all the slop.

[–] jj4211@lemmy.world 1 points 41 minutes ago

I suspect the answer will be that such large requests as you frequently see with LLM codegen will just be rejected.

Already I see changes broken up and suggested bit by bit, so I presume the same best practice applies.

[–] kcuf@lemmy.world 8 points 2 hours ago (1 children)

Ya, I'm finding myself being the bad code generator at work. I'm scattered across so many things at the moment due to attrition, and AI can do a lot of the boilerplate work, but it's such a time and energy sink to fully review what it generates. I've found basic things I missed that others catch, which shows the sloppiness. I usually take pride in my code, but I have no attachment to what's generated, and that's exposing issues with trying to scale out using this.

[–] Repelle@lemmy.world 7 points 2 hours ago* (last edited 1 hour ago) (1 children)

Same. There’s reduction in workforce, pressure to move faster, and no good way to do that without sloppiness. I have never been this down on the industry before; it was never great, but now it’s terrible.

[–] Danitos@reddthat.com 1 points 25 minutes ago

Some thought I had the other day: LLMs are supposed to make us more productive, say by 20%. Have you won a 20% pay rise since you adopted them? I haven't.

[–] Blue_Morpho@lemmy.world 114 points 5 hours ago (2 children)

The title of the article is extraordinarily wrong, which makes it clickbait.

There is no "yes to copilot"

It is only a formalization of what Linus said before: all AI is fine, but a human is ultimately responsible.

" AI agents cannot use the legally binding "Signed-off-by" tag, requiring instead a new "Assisted-by" tag for transparency"

The only mention of Copilot was this:

"developers using Copilot or ChatGPT can't genuinely guarantee the provenance of what they are submitting"

This remains a problem that the new guidelines don't resolve. Even using AI as a tool and having a human review it still means the code the LLM output could have come from non-GPL sources.
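For the mechanics, these tags are just commit-message trailers. A rough sketch of what the scheme looks like, with a hypothetical tool name (the exact tag format is whatever the kernel documentation specifies):

```shell
# Append the policy's trailers to a draft commit message.
# "Assisted-by" is the new transparency tag for AI tooling;
# the human's "Signed-off-by" remains the legally binding attestation.
cat > msg.txt <<'EOF'
net: fix refcount leak in error path

An LLM helped draft this change; a human reviewed it and
takes responsibility for its provenance.
EOF

git interpret-trailers \
  --trailer "Assisted-by: some-llm-tool (hypothetical)" \
  --trailer "Signed-off-by: Dev Eloper <dev@example.com>" \
  msg.txt
```

`git interpret-trailers` prints the message with both trailers appended at the bottom; on a real submission the same trailer block would appear at the end of the patch description.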

[–] marlowe221@lemmy.world 26 points 4 hours ago* (last edited 4 hours ago) (1 children)

Yeah, that’s also my question. Partially because I am a former-lawyer-turned-software-developer… but, yeah. How are the kernel maintainers supposed to evaluate whether a particular PR contains non-GPL code?

Granted, this was potentially an issue before LLMs too, but nowhere near the scale it will be now.

(In the interests of full disclosure, my legal career had nothing to do with IP law or software licensing - I did public interest law).

[–] stsquad@lemmy.ml 10 points 4 hours ago

They don't, just like they don't with human submitted stuff. The point of the Signed-off-by is the author attests they have the rights to submit the code.

[–] anarchiddy@lemmy.dbzer0.com 9 points 4 hours ago

Yup.

I would also just point out that this doesn't change the Linux kernel's legal exposure to infringing submissions from before the advent of LLMs.

[–] theherk@lemmy.world 105 points 6 hours ago (2 children)

Seems like a reasonable approach. Make people be accountable for the code they submit, no matter the tools used.

[–] ell1e@leminal.space 16 points 5 hours ago (1 children)

If the accountability cannot be practically fulfilled, the reasonable policy becomes a ban.

What good is it to say "oh yeah you can submit LLM code, if you agree to be sued for it later instead of us"? I'm not a lawyer and this isn't legal advice, but sometimes I feel like that's what the Linux Foundation policy says.

[–] ViatorOmnium@piefed.social 28 points 4 hours ago (5 children)

But this was already the case. When someone submitted code to Linux they always had to assume responsibility for the legality of the submitted code, that's one of the points of mandatory Signed-off-by.

[–] hperrin@lemmy.ca 3 points 3 hours ago (2 children)

No, it’s not a reasonable approach. Making people the authors of the code they submit is reasonable, because then it can be released under the GPL. AI-generated code is public domain.

[–] ziproot@lemmy.ml 3 points 1 hour ago

Isn’t that the rule? The author has to be a human?

The new guidelines mandate that AI agents cannot use the legally binding "Signed-off-by" tag, requiring instead a new "Assisted-by" tag for transparency. Ultimately, the policy legally anchors every single line of AI-generated code and any resulting bugs or security flaws firmly onto the shoulders of the human submitting it.

[–] theherk@lemmy.world 4 points 3 hours ago

I suppose there should be no code generators, assemblers, compilers, linkers, or LSPs then either? Just etching 1s and 0s?

[–] hperrin@lemmy.ca 9 points 3 hours ago (3 children)

This is a bad move. The GPL license cannot be enforced on AI generated code.

[–] terabyterex@lemmy.world 3 points 2 hours ago

That's not true. The new article being shoved down Lemmy's throat is not correct. They cite court cases and come to bad conclusions.

[–] 0ndead@infosec.pub 31 points 5 hours ago (3 children)

“Yes to Copilot, no to AI slop”

Pick One

[–] truthfultemporarily@feddit.org 10 points 5 hours ago (11 children)

Where does slop start? If you use autocomplete and it is just adding a semicolon or some braces, is it slop? Is producing, character by character, what you would have written yourself slop?

How about using it for debugging?

[–] hperrin@lemmy.ca 7 points 3 hours ago

You don’t need AI to autocomplete code. We’ve had autocomplete for over 30 years.

[–] ell1e@leminal.space 6 points 5 hours ago* (last edited 5 hours ago)

If you would have written it yourself the same way, why not write it yourself? (And there was autocomplete before the age of LLMs, anyway.)

The big problems start with situations where it doesn't match what you would have written, but rather what somebody else has written, character by character.

[–] femtek@lemmy.blahaj.zone 9 points 5 hours ago

I mean, I don't use Copilot, but I do use a self-hosted Claude at work for debugging and creating templates. I still run through and test it. I'm only doing Crossplane, Kyverno, and Kubernetes infra things though, and I started without it, so I have an understanding. But someone's Crossplane composition written in Go was failing, and when I asked him about the error he just said to get the AI to fix it, which was worrying since his last day is next week.

[–] ell1e@leminal.space 20 points 6 hours ago* (last edited 5 hours ago) (6 children)

Ultimately, the policy legally anchors every single line of AI-generated code

How would that even be possible? Given the state of things:

https://dl.acm.org/doi/10.1145/3543507.3583199

Our results suggest that [...] three types of plagiarism widely exist in LMs beyond memorization, [...] Given that a majority of LMs’ training data is scraped from the Web without informing content owners, their reiteration of words, phrases, and even core ideas from training sets into generated texts has ethical implications. Their patterns are likely to exacerbate as both the size of LMs and their training data increase, [...] Plagiarized content can also contain individuals’ personal and sensitive information.

https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/

Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books. [...] This phenomenon has been called “memorization,” and AI companies have long denied that it happens on a large scale. [...]The Stanford study proves that there are such copies in AI models, and it is just the latest of several studies to do so.

https://www.twobirds.com/en/insights/2025/landmark-ruling-of-the-munich-regional-court-(gema-v-openai)-on-copyright-and-ai-training

The court confirmed that training large language models will generally fall within the scope of application of the text and data mining barriers, [...] the court found that the reproduction of the disputed song lyrics in the models does not constitute text and data mining, as text and data mining aims at the evaluation of information such as abstract syntactic regulations, common terms and semantic relationships, whereas the memorisation of the song lyrics at issue exceeds such an evaluation and is therefore not mere text and data mining

https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7

In this work we explored the relationship between discourse quality and memorization for LLMs. We found that the models that consistently output the highest-quality text are also the ones that have the highest memorization rate.

https://arxiv.org/abs/2601.02671

recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question if similar extraction is feasible for production LLMs, given the safety measures [...]. We investigate this question [...] our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs.

How does merely tagging the apparently stolen content make it less problematic, given I'm guessing it still won't have any attribution of the actual source (which for all we know, might often even be GPL incompatible)?

But I'm not a lawyer, so what do I know. Even from a non-legal angle, though, what is this road the Linux Foundation seems to be embracing of just ignoring the licenses of projects? Why even have the kernel be GPL then, rather than CC0?

I don't get it. And the article calling this "pragmatism" seems absurd to me.

[–] mesamunefire@piefed.social 7 points 4 hours ago

I hate AI in my kernel...

[–] veniasilente@lemmy.dbzer0.com 5 points 4 hours ago

How is this all supposed to work, when AI code cannot be copyrighted and thus those submissions to the Linux kernel cannot be, e.g., GPLv{number}?

[–] XLE@piefed.social 11 points 5 hours ago (3 children)

This seems like an ill-thought-out decision, especially in a landscape where Linux should be differentiating itself from Windows, not following it.

The titular "slop" just means "bad AI generated code is banned" but the definition of "bad" is as vague as Google's "don't be evil." Good luck enforcing it, especially in an open-source project where people's incentives aren't tied to a paycheck.

Title is also inaccurate regarding CoPilot (the Microsoft brand AI tool), as a comment there mentions

says yes to Copilot

Where in the article does it say that?? The only mention of CoPilot is where it talks about LLM-generated code having unverifiable provenance.

[–] Naich@piefed.world 10 points 5 hours ago

Google's "don't be evil" was like a warrant canary. It didn't need to be precise, it just needed to be there.

[–] avidamoeba@lemmy.ca 7 points 5 hours ago (2 children)

They're already enforcing it. PRs are reviewed and bad ones are rejected all the time.

[–] twinnie@feddit.uk 8 points 5 hours ago (8 children)

No point getting upset about this, it’s inevitable. So many FOSS programmers work thanklessly for hours and now there’s some tool to take loads of that work away, of course they’re going to use it. I know loads of people complain about it but used responsibly it can take care of so much of the mundane work. I used to spend 10% of my time writing code then 90% debugging it. If I do that 10% then give it to Claude to go over I find it just works.

[–] uuj8za@piefed.social 5 points 3 hours ago (1 children)

but used responsibly

That's like the most incredibly hard part of all of this. Everything is aligned so that you don't use it responsibly. And it's really hard to guard against this.

Just a few days ago, I was pairing with a coworker and he was using Claude to do a bunch of stuff. He didn't check any of it. I thought he was gonna check stuff before pushing stuff... And nope! I said, "Wait, shouldn't we review the changes to make sure they're correct?" And he said, "Nah, it's probably fine. I trust it. Plus, even if it's wrong, we'll just blame the AI and we can just fix it later."

...

Yes, checking the work would have negated all of the "time saved" and he was being a lazy fuck.

People who don't like coding or engineering use this and they are not interested in using this responsibly.

[–] Tiresia@slrpnk.net 1 points 1 hour ago

That's valid for workers in a capitalist system or for capitalists trying to scam people. But why would someone sign their real name to unchecked AI slop for an open source project? It would risk ruining their reputation for little personal gain.

[–] treadful@lemmy.zip 4 points 5 hours ago (2 children)

I'm curious how this is going to play out legally for copyright. If you accept AI code, you can't copyright it, so aren't you essentially forfeiting the copyleft license?
