this post was submitted on 23 Mar 2026
254 points (99.6% liked)

Technology

[–] infeeeee@lemmy.zip 189 points 4 hours ago (6 children)

Saved you a click:

After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing-assistance tool. The policy warns: “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

The second exception covers translation assistance. Editors can use AI tools for a first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.

[–] RIotingPacifist@lemmy.world 115 points 4 hours ago (3 children)

AIbros: we're creating God!!!

AI users: it can do translation & reformatting pretty well, but you've got to check it's not chatting shit

[–] XLE@piefed.social 2 points 56 minutes ago

I don't think honest AI users would say it does reformatting either: if you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of leaving text unchanged. Getting burned by that once should be enough to teach the lesson.
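One low-tech way to catch this (a minimal sketch, not something from the thread or the policy): compare the word sequence before and after the "reformat". If only whitespace and line breaks moved, the words should be identical; any difference means the model also rewrote the text.

```python
import re

def words(text: str) -> list[str]:
    # Split on whitespace so pure reformatting (line wraps, indentation,
    # blank lines) does not affect the comparison.
    return re.findall(r"\S+", text)

def reformat_changed_content(original: str, reformatted: str) -> bool:
    # True if the word sequence differs, i.e. the "reformat"
    # rewrote the text rather than just re-flowing it.
    return words(original) != words(reformatted)

before = "Hello world,\nthis is a test."
after_ok = "Hello world, this is a test."
after_bad = "Hi world, this is a test."

print(reformat_changed_content(before, after_ok))   # only whitespace moved
print(reformat_changed_content(before, after_bad))  # wording changed
```

This only flags wording changes, of course; it says nothing about whether the original wording was accurate in the first place.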

[–] halcyoncmdr@piefed.social 22 points 3 hours ago

The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they're asking for anyway. All output needs to be verified before being used or relied upon.

The "AI" is just streamlining the process to save time.

Relying on it otherwise is stupid and just proves instantly that you are incompetent.

[–] youcantreadthis@quokk.au 1 point 1 hour ago (1 child)

Fucking hate that anti-human filth pushing slop into everything. I want to take one apart with power tools.

[–] FauxPseudo@lemmy.world 0 points 17 minutes ago

Seems like there should be a third exception for those occasions where an article is about LLM-generated text. Editors should be able to quote it when it's appropriate for the article.

[–] errer@lemmy.world 7 points 2 hours ago (3 children)

Wikipedia probably wants to sell training access to LLM companies. That access is only valuable if Wikipedia remains a high-quality, slop-free source.

I think even AI zealots think there should be silos of content to train from that are fully human generated. Training slop on slop makes the slop even worse.

This was only done because the editors pushed to minimize AI involvement. There's a comment here already mentioning that: https://lemmy.world/comment/22826863

[–] Grimy@lemmy.world 8 points 1 hour ago

Sell licenses to what? It's already all under Creative Commons, iirc.

[–] SuspciousCarrot78@lemmy.world 6 points 1 hour ago

AI already trains on Wikipedia.

https://commoncrawl.org/

Seems pretty reasonable to use it as a grammar checker. As long as it's not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.

[–] daychilde@lemmy.world 12 points 3 hours ago

Liar. I already read the article before opening the comments. YOU SAVED ME NOTHING.

;-)

[–] ji59@hilariouschaos.com 18 points 4 hours ago

So, it should be used reasonably, as it should have always been.