this post was submitted on 24 Jun 2025
634 points (98.9% liked)

[–] LoreleiSankTheShip@lemmy.ml 7 points 1 day ago (1 children)

As long as they don't use exactly the same words in the book, yeah, as I understand it.

[–] vane@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (2 children)

How would they not use the same words as in the book? That's not how LLMs work. They use exactly the same words if the probabilities align. That's demonstrated by this study. https://arxiv.org/abs/2505.12546
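To illustrate the "probabilities align" point, here's a toy sketch (mine, not the study's method): a tiny bigram model trained on a single sentence will reproduce it word-for-word under greedy decoding, because at every step the highest-probability next word is the one that followed in the training text.

```python
from collections import Counter, defaultdict

# Toy example: "train" a bigram model on one sentence.
text = "it was a bright cold day in april and the clocks were striking thirteen".split()

counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1  # count how often nxt follows prev

def greedy_generate(start, steps):
    out = [start]
    for _ in range(steps):
        nxt_counts = counts.get(out[-1])
        if not nxt_counts:
            break
        # pick the highest-probability next word
        out.append(nxt_counts.most_common(1)[0][0])
    return " ".join(out)

# Greedy decoding reproduces the training sentence verbatim.
print(greedy_generate("it", len(text) - 1))
```

Real LLMs are vastly bigger and usually sample rather than decode greedily, which is why verbatim reproduction is probabilistic rather than guaranteed, but the mechanism is the same.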

[–] nednobbins@lemmy.zip 5 points 1 day ago (1 children)

I'd say there are two issues with it.

First, it's a very new article with only 3 citations. The authors seem like serious researchers, but the paper itself is still in the "hot off the presses" stage and wouldn't qualify as "proven" yet.

Second, it doesn't exactly say that books are copies. It says that in some models, it's possible to extract some portions of some texts. They cite "1984" and "Harry Potter" as two books that can be extracted almost entirely, under some circumstances. They also find that, in general, extraction rates are below 1%.

[–] vane@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

Yeah, but it's just a start toward reversing the process and proving there's no "AI" there. We've only just started with generated text; I bet people will figure out how to reverse the process using some sort of Rosetta Stone. It's just probabilities, after all.

[–] nednobbins@lemmy.zip 2 points 1 day ago (1 children)

That's possible but it's not what the authors found.

They spend a fair amount of the conclusion emphasizing how exploratory and ambiguous their findings are. The researchers themselves are very careful to point out that this is not a smoking gun.

[–] vane@lemmy.world 2 points 1 day ago

Yeah, the authors rely on the recent DeepMind paper https://aclanthology.org/2025.naacl-long.469.pdf (they even cite it) that describes (n, p)-discoverable extraction. These are recent studies because right now there are no boundaries; basically, people made something and are now studying their own creation. We're probably years away from something like a GDPR for LLMs.
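For anyone curious, here's a rough sketch of the (n, p)-discoverable criterion as I read it (names and numbers are mine, not the paper's code): if a single sampled generation matches the target text with probability q, then across n independent samples the chance of at least one verbatim match is 1 − (1 − q)^n, and the text counts as (n, p)-discoverable when that chance reaches p.

```python
def discoverable(q: float, n: int, p: float) -> bool:
    """True if n samples at per-sample match rate q produce
    at least one verbatim match with probability >= p."""
    return 1 - (1 - q) ** n >= p

# Even a tiny per-sample rate becomes likely with enough samples:
print(discoverable(0.001, n=1, p=0.5))     # False: one try almost never matches
print(discoverable(0.001, n=1000, p=0.5))  # True: 1 - 0.999^1000 is about 0.63
```

That's why the "extraction rate" depends so heavily on how many samples you're allowed to draw.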

[–] SufferingSteve@feddit.nu 6 points 1 day ago

The "if" is working overtime in your statement