A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data
(theconversation.com)
In some cases, it’s people who’ve done the research and written the paper who then use an LLM to give it a final polish. Often, it’s people who are writing in a non-native language.
Doesn’t make it good or right, but adds some context.
Sure, and I'm sympathetic to the baffling difficulties of English, but use Google Translate and then ask someone more fluent for help with the final polish (as one suggestion). Trusting your work, trusting science itself, to an LLM is lunacy.
Google Translate uses the same approach as an LLM.
https://en.wikipedia.org/wiki/Google_Translate
https://en.wikipedia.org/wiki/Neural_machine_translation
So is DeepL
https://en.wikipedia.org/wiki/DeepL_Translator
And before they used neural-network approaches, they used statistical ones, which are subject to the same kinds of errors from bad training data.
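A toy sketch of that failure mode, in the spirit of the article's "weird phrase": a phrase-based statistical translator is only as good as its phrase table, and a single extraction glitch in the training corpus (here, two column-adjacent phrases fused into one entry; all data below is hypothetical) then shows up verbatim in output. This is a grossly simplified greedy longest-match lookup, not a real MT system.

```python
# Hypothetical "phrase table" learned from parallel text. Suppose a layout
# error in the source corpus fused two unrelated adjacent phrases into one
# bogus entry during extraction:
phrase_table = {
    "rasterelektronenmikroskopie": "scanning electron microscopy",
    "vegetative zelle": "vegetative cell",
    # glitched entry: column-adjacent phrases fused into a nonsense pair
    "vegetative elektronenmikroskopie": "vegetative electron microscopy",
}

def translate(text: str) -> str:
    """Greedy longest-match phrase lookup (grossly simplified MT)."""
    words = text.lower().split()
    out = []
    i = 0
    while i < len(words):
        # Try the longest phrase starting at position i, then shorter ones.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in phrase_table:
                out.append(phrase_table[phrase])
                i = j
                break
        else:
            out.append(words[i])  # pass unknown words through unchanged
            i += 1
    return " ".join(out)

# The bad training entry now reproduces the weird phrase on every match:
print(translate("vegetative Elektronenmikroskopie"))
# → vegetative electron microscopy
```

The same dynamic applies to neural systems; the difference is that the bad data is baked into weights rather than sitting in an inspectable table, which makes the error harder to trace.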
Check the results, though. Google Translate is far, far better at translation than a generic LLM.