Fighting fire with fire
Technology
This is not surprising if you've studied any machine learning or even just basic statistics. Say you're trying to find the optimal amount of a thickener to add to a paint formulation to get the flow you want. If you test it at 5%, then 5.1%, then 5.2%, it will be hard to tell how much of the difference between those batches is due to randomness or measurement uncertainty; testing at 0%, then 25%, then 50% makes the effect much easier to see. This is a principle called Design of Experiments (DoE) in traditional statistics, and a similar effect happens when you train machine learning models: datapoints far outside the norm improve the model's ability to predict across the entire model space (with some nuance, because they can become over-represented if care isn't taken). In this case, 4chan shows the model the edges of the English language and human psychology, like testing the paint additive at 0% or 50% rather than staying around 5%.
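A minimal sketch of that DoE intuition, using ordinary least squares (my own illustration, not from the paper): the uncertainty of a fitted slope scales with 1/Σ(x−x̄)², so widely spaced test levels pin the trend down far more precisely than clustered ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_std(x, noise=0.5, trials=2000):
    """Fit y = a*x + b to noisy data many times and return the
    spread (std dev) of the estimated slope across trials."""
    true_a, true_b = 2.0, 1.0
    slopes = []
    for _ in range(trials):
        y = true_a * x + true_b + rng.normal(0, noise, size=x.size)
        a, _ = np.polyfit(x, y, 1)  # degree-1 fit: (slope, intercept)
        slopes.append(a)
    return np.std(slopes)

clustered = np.array([5.0, 5.1, 5.2])   # thickener % packed together
spread = np.array([0.0, 25.0, 50.0])    # widely spaced levels

# The widely spaced design estimates the slope far more precisely.
print(slope_std(clustered) > slope_std(spread))  # True
```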
At least, that's my theory. I haven't read the paper but plan to tonight when I have time; at first glance I'm not surprised. When I've worked on industrial ML applications, processes with a lot of problems produced better training data than well-controlled ones, and I've read papers where people improved their models' performance by introducing (controlled) randomness into their control setpoints to collect training data outside the tight control regime.
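The setpoint trick is essentially overlaying a small excitation signal on the target value so the logged process data covers a wider operating range. A toy sketch (hypothetical names and dither band, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def excited_setpoints(base, n_steps, dither=0.5):
    """Overlay small, bounded random excitation on a control setpoint
    so the process visits (and logs) states outside the usual regime."""
    return base + rng.uniform(-dither, dither, size=n_steps)

sp = excited_setpoints(80.0, 5)
# every value stays within the dither band around the base setpoint,
# but the process no longer sits at exactly 80.0 the whole time
```

The dither has to stay small enough that product quality and safety limits are respected, which is why the randomness is "controlled."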
Kinda weird GPT-4chan wasn't referenced. A guy fine-tuned GPT-J on 4chan posts, then deployed bots to write posts there. I guess it was more of a stunt than academic or scientific work, but training on 4chan did improve the model's performance on a truthfulness benchmark.
Fresh "AI" pseudo-science for a Monday morning.
These grifters never even define "bad/toxic data". It's just 4chan ffs.