this post was submitted on 23 Mar 2026
258 points (99.6% liked)
Technology
Why should we use (insert tool) when we did just fine before?
Because when used correctly it can be great for helping you be more productive and for finding errors/making improvements. One exception is grammar, which AI does a surprisingly good job with. Would you have gotten mad if they'd used Grammarly five years ago? Having it rewrite an entire article is gonna be a bad idea, but asking it to rephrase a sentence, or to check your phrasing for potential issues, is a much safer thing. Not everyone who speaks Spanish uses it the same way: some words are innocuous in some regions but offensive in others.
Why fire, berries fine
Try using fire in a library.
Wikipedia isn't a library.
Neither is AI fire. 🙄
Call me mad, call me crazy: AI shouldn't be altering databases of knowledge, especially when it is so inconsistent. If there is a question about whether certain words are appropriate, why can't you ask another human being? They have forums for a reason, or someone else will come along and fix it. Or look at a dictionary. The amount of energy spent for dubious information, holy. It's not like there is a shortage of human beings on earth.
https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_with_large_language_models
https://en.wikipedia.org/wiki/Wikipedia:LLM-assisted_translation
The two related "policies" are rather short; you should read them if you haven't.
The policy only allows usage as an auto-translator (a task at which LLMs are no worse than the old-style auto-translators that were always allowed) and as a spellcheck/grammar check (where they are also no worse than other allowed options).
None of those tools were previously seen as altering Wikipedia by themselves. The goal is that LLMs should be used and considered the same way.
To be clear, there have always been Articles for Creation submissions built from clearly Google-translated text, and they have always been dismissed as slop. To get an auto-translated article accepted, you need to clean it up until all the information is correct and the grammar is good enough. This is a fairly standard workflow for translations, and the same thing should apply to LLMs.
The new issue here is that LLMs can "organically" change information when asked to translate. When a classic auto-translator changes the information, it often (not always) leaves a noticeable mess in the grammar. LLMs insert their errors much more cleanly. This is acknowledged by both texts, and, well, the texts will change if that becomes a recurring issue.