this post was submitted on 18 Feb 2026
870 points (99.3% liked)
Technology
you are viewing a single comment's thread
view the rest of the comments
I fear the day they learn a different layout. Right now they're usually obvious, but soon I won't be able to tell slop from intelligence.
One could argue that if the AI response is truly indistinguishable from a human one, then the two are equivalent and it doesn't matter.
That said, current LLM designs have no ability to do that, and so far every effort to improve them beyond where they are today has made them worse at it. So I don't think any tweaking or fiddling with the model will ever get it anywhere near what you're describing; at best it will produce a different, but equally cookie-cutter, style of responding. The new output may look different from the old output, but it will look much like all the other new output, and it will become obvious and predictable again as soon as we learn its new tells.
The reason they can't make it better anymore is that they're trying to do so by feeding it ever more data, in the misguided notion that once it has enough, it will be smarter overall. That isn't true, because it has no way to distinguish good data from garbage, and they have already read and consumed the whole Internet.
Now, when they try to consume new data, a ton of it was itself generated by an LLM, maybe even the same one, so it adds no new information but still takes more compute to read and process. That redundant data also reinforces what the model thinks it knows, because it counts its own repetition of a piece of information as another corroboration that the information is accurate. It treats conjecture as fact because it saw a lot of "people" say the same thing. It could have been one crackpot talking nonsense that was then repeated as gospel on Reddit by 400 LLM bots. 401 people said the same thing; it MUST be true!
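To make that last point concrete, here's a toy Python sketch (purely illustrative, not how any real training pipeline works): a naive frequency-based score treats every repetition of a claim as independent corroboration, so one claim echoed by 400 bots ends up looking better supported than anything stated by fewer, genuinely distinct sources.

```python
# Toy illustration: a naive frequency-based "confidence" score counts every
# repetition of a claim as independent corroboration, so bot-amplified noise
# outranks less-repeated claims regardless of where the repetitions came from.
from collections import Counter

def naive_confidence(statements):
    """Score each claim by its raw share of the scraped corpus."""
    counts = Counter(statements)
    total = len(statements)
    return {claim: count / total for claim, count in counts.items()}

# One crackpot claim echoed 400 times by bots (401 occurrences total),
# versus a claim that appears 50 times from distinct posters.
corpus = ["the moon is made of cheese"] * 401 + ["the moon is made of rock"] * 50

for claim, score in sorted(naive_confidence(corpus).items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {claim}")
# The repeated claim wins on raw frequency even though it traces back to a
# single source; only deduplication or source tracking would reveal that.
```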