As you can learn from reading the article, they do also store the information itself.
They learn and store a compression algorithm that fits the data, then use it to store that data. The first part isn't new; the link between AI and compression theory goes back decades. What's new and surprising is that you can get the original work back out of attention transformers. Even in traditionally overfit models that isn't a given, and attention transformers are prized for their generality, so it's not evident that they should do this. Yet every model tested does it, so maybe it's even necessary?
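For the curious, here's a rough sketch of what that kind of extraction check looks like in practice: prompt a model with the opening tokens of a known passage and count how much of the continuation it reproduces verbatim under greedy decoding. Everything here (the Hugging Face `transformers` usage, the placeholder model name, the 50-token prompt length) is illustrative, not the article's actual methodology:

```python
# Rough sketch of a verbatim-memorization probe for a causal LM.
# Assumes the Hugging Face `transformers` library; "gpt2" is just a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer


def memorized_token_count(model_name: str, text: str, prompt_tokens: int = 50) -> int:
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    ids = tok(text, return_tensors="pt").input_ids[0]
    prompt, target = ids[:prompt_tokens], ids[prompt_tokens:]

    # Greedy decoding: no sampling, so any verbatim continuation comes
    # straight out of the model's weights, not from luck.
    out = model.generate(
        prompt.unsqueeze(0),
        max_new_tokens=len(target),
        do_sample=False,
    )
    generated = out[0, prompt_tokens:].tolist()

    # Count how many tokens of the real continuation are reproduced exactly.
    matched = 0
    for g, t in zip(generated, target.tolist()):
        if g != t:
            break
        matched += 1
    return matched


# e.g. memorized_token_count("gpt2", passage_text)
# where passage_text is a long passage you suspect was in the training data.
```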
Storing data isn't a theoretical failure; some very useful AI algorithms do it by design. It's a legal and ethical failure because OpenAI and the rest have claimed from the beginning that this isn't happening, and it also provides proof of the pirated works the models were trained on.
The images in the article clearly show that they're not storing the data itself; they're storing enough information about the data to reconstruct a rough and mostly useless approximation of it. And they do so in such a way that the information about one piece of data can be combined with the information about another to produce an equally rough, mostly useless approximation of a combination of the two, something that was never in the original dataset.
It's like playing a telephone game with a description of an image, with the last person drawing the result.
The legal and ethical failure is in commercially using artists' works (to train a model) without permission, not in storing or even reproducing them, since the slop they produce is evidently an approximation and not the real thing.
The law disagrees. Compression has never been a valid argument. A crunchy 360p rip of a movie is a mostly useless approximation but sharing it is definitely illegal.
Fun fact: you can use MPEG as a surprisingly decent perceptual image-comparison algorithm (e.g. for facial recognition) by looking at the file size of a two-frame video. This works for mostly the same theoretical reasons as neural-network-based methods. Of course, MPEG was designed by humans using legally obtained videos for evaluation, but it does its job without being able to reproduce any of those videos at all, so that isn't a requirement for compression.
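As a toy illustration of the trick (not anyone's production code): stage the two images as consecutive frames, let ffmpeg encode them as a short MPEG-4 clip, and use the file size as a similarity signal. The function names and quality settings are made up for the example; it assumes ffmpeg is on PATH and that both images share the same format and dimensions.

```python
# Sketch of compression-based image similarity via a two-frame MPEG-4 clip.
# Assumes ffmpeg is installed and both images have the same format/dimensions.
import os
import shutil
import subprocess
import tempfile


def two_frame_size(img_a: str, img_b: str) -> int:
    """Encode img_a and img_b as consecutive frames and return the clip's size.
    The better the encoder predicts frame 2 from frame 1, the smaller the file."""
    ext = os.path.splitext(img_a)[1] or ".png"
    with tempfile.TemporaryDirectory() as tmp:
        shutil.copy(img_a, os.path.join(tmp, f"frame_0001{ext}"))
        shutil.copy(img_b, os.path.join(tmp, f"frame_0002{ext}"))
        out = os.path.join(tmp, "pair.mp4")
        subprocess.run(
            ["ffmpeg", "-loglevel", "error", "-y",
             "-framerate", "1",
             "-i", os.path.join(tmp, f"frame_%04d{ext}"),
             "-c:v", "mpeg4", "-q:v", "5",
             out],
            check=True,
        )
        return os.path.getsize(out)


def similarity(img_a: str, img_b: str) -> float:
    """Normalize against a same-image baseline: ~1.0 means near-identical."""
    return two_frame_size(img_a, img_a) / two_frame_size(img_a, img_b)
```

Running `similarity(a, b)` on two photos of the same face should land noticeably closer to 1.0 than two unrelated photos, which is roughly the hand-wavy point above: the encoder exploits shared structure without ever having stored either image.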