this post was submitted on 24 Jun 2025
634 points (98.9% liked)

[–] FreedomAdvocate 16 points 1 day ago (3 children)

Makes sense. AI can “learn” from and “read” a book in the same way a person can and does, as long as it is acquired legally. AI doesn’t reproduce a work that it “learns” from, so why would it be illegal?

Some people just see “AI” and want everything about it outlawed basically. If you put some information out into the public, you don’t get to decide who does and doesn’t consume and learn from it. If a machine can replicate your writing style because it could identify certain patterns, words, sentence structure, etc then as long as it’s not pretending to create things attributed to you, there’s no issue.

[–] badcommandorfilename@lemmy.world 9 points 1 day ago (1 children)

Ask a human to draw an orc. How do they know what an orc looks like? They read Tolkien's books and were "inspired" by Peter Jackson's LOTR.

Unpopular opinion, but that's how our brains work.

[–] burntbacon@discuss.tchncs.de 1 points 1 day ago* (last edited 1 day ago)

Fuck you, I won't do what you tell me!

>.>

<.<

spoiler: I was inspired by the sometimes hilarious DnD splatbooks, thank you very much.

[–] elrik@lemmy.world 5 points 1 day ago (2 children)

AI can “learn” from and “read” a book in the same way a person can and does

This statement is the basis for your argument and it is simply not correct.

Training LLMs and similar AI models is much closer to a sophisticated lossy compression algorithm than it is to human learning. The processes are not at all similar given our current understanding of human learning.
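To make the "lossy compression" framing concrete, here is a toy sketch (my own illustration; nothing like a real transformer): a character-level n-gram model "trained" on a small corpus just records statistics, and where those statistics pin the data down exactly, "generation" is pure playback.

```python
from collections import defaultdict
import random

def train_ngram(text, n=8):
    # "Train" by recording, for each (n-1)-char context,
    # the characters that follow it in the corpus.
    model = defaultdict(list)
    for i in range(len(text) - n + 1):
        model[text[i:i + n - 1]].append(text[i + n - 1])
    return model

def generate(model, seed, length, n=8):
    # Extend the seed one character at a time by sampling
    # from the recorded continuations.
    out = seed
    for _ in range(length):
        choices = model.get(out[-(n - 1):])
        if not choices:
            break
        out += random.choice(choices)
    return out

corpus = "the quick brown fox jumps over the lazy dog"
model = train_ngram(corpus)
# Every 7-char context in this corpus is unique, so "generation"
# deterministically reproduces the training text verbatim.
reproduced = generate(model, corpus[:7], length=len(corpus) - 7)
```

A real LLM generalizes far beyond this, but the direction of the analogy holds: the model is a statistical summary of its data, and where the data is sparse, sampling collapses toward verbatim recall.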

AI doesn’t reproduce a work that it “learns” from, so why would it be illegal?

The current Disney lawsuit against Midjourney is illustrative - literally, it includes numerous side-by-side comparisons - of how AI models are capable of recreating iconic copyrighted work that is indistinguishable from the original.

If a machine can replicate your writing style because it could identify certain patterns, words, sentence structure, etc then as long as it’s not pretending to create things attributed to you, there’s no issue.

An AI doesn't create works on its own. A human instructs AI to do so. Attribution is also irrelevant. If a human uses AI to recreate the exact tone, structure and other nuances of say, some best selling author, they harm the marketability of the original works which fails fair use tests (at least in the US).

[–] FreedomAdvocate 1 points 1 day ago (2 children)

Your very first statement, calling the basis of my argument incorrect, is itself incorrect lol.

LLMs “learn” things from the content they consume. They don’t just take the content in wholesale and keep it there to regurgitate on command.

On your last part: unless someone uses AI to recreate the tone etc. of a best selling author *and then markets their book/writing as being from said best selling author*, or uses trademarked characters etc., there's no issue. You can't copyright a style of writing.

[–] elrik@lemmy.world 1 points 1 day ago

I'll repeat what you said with emphasis:

AI can “learn” from and “read” a book in the same way a person can and does

The emphasized part is incorrect. It's not the same, yet your argument seems to be that because (your claim) it is the same, then it's no different from a human reading all of these books.

Regarding your last point: copyright law doesn't just kick in because you try to pass something off as an original (by, for example, marketing a book as being from a best selling author). It applies based on similarity, whether you mention the original author or not.

[–] WraithGear@lemmy.world 0 points 1 day ago* (last edited 1 day ago) (2 children)

If what you are saying is true, why were these "AI's" incapable of rendering a full wine glass? It "knows" the concept of a full glass of water, but because of humanity's social pressures (a full wine glass being the epitome of gluttony), artwork simply didn't depict full wine glasses. No matter how AI prompters demanded it, the model was unable to link the two concepts until a reference was literally created for it to regurgitate. It seems "AI" doesn't really learn, but regurgitates art in collages of taken assets, smoothed over at the seams.

[–] FaceDeer@fedia.io 2 points 1 day ago (1 children)
[–] WraithGear@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (1 children)

“it was unable to link the concepts until it was literally created for it to regurgitate it out“

-WraithGear

The "problem" was solved before their patch. But the article just says that the model was changed by running it through a post-check, just like what DeepSeek does. It does not address the fundamental flaw in how the model creates; they just assert that it does, like they always have.

[–] FaceDeer@fedia.io 2 points 1 day ago

I don't see what distinction you're trying to draw here. It previously had trouble generating full glasses of wine, they made some changes, now it can. As a result, AIs are capable of generating an image of a full wine glass.

This is just another goalpost that's been blown past, like the "AI will never be able to draw hands correctly" thing that was so popular back in the day. Now AIs are quite good at drawing hands, and so new "but they can't do X!" standards have been invented. I see no fundamental reason why any of those standards won't ultimately be surpassed.

[–] alsimoneau@lemmy.ca 3 points 1 day ago (2 children)
[–] WraithGear@lemmy.world 6 points 1 day ago (1 children)

  1. It's not full, but closer than it was.
  2. I specifically said that the AI was unable to do it until someone specifically made a reference so that it could start passing the test, so it's a little bit late to prove much.
[–] alsimoneau@lemmy.ca 2 points 1 day ago (1 children)

The concept of a glass being full and of a liquid being wine can probably be separated fairly well. I assume that as models got more complex they started being able to do this more.

[–] WraithGear@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

You mean when the training data becomes more complete. But that's the thing: when this issue was being tested, the "AI" would swear up and down that the normally filled wine glasses were full. When it was pointed out that they were not in fact full, the "AI" would agree, and then change some other aspect of the picture it didn't fully understand. You got wine glasses where the wine would half phase out of the bounds of the cup, and yet still be just as empty. No amount of additional checks will help without an appropriate reference.

I use "AI" extensively; I have one running locally on my computer, and I swap models out from time to time. I don't have anything against its use, with certain exceptions. But I cannot stand people personifying it beyond its scope.

Here is a good example. I am working on an app, so every once in a while I will send it code to check. But I have to be very careful: the code it spits out tends to be unoptimized, like `variable1 = IF(variable2 IS true, true, false)`.

Some have issues with object permanence, or with the consideration of time outside their training data. It's like saying a computer can generate a truly random number by making the function that calculates the number more convoluted.
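The unoptimized pattern described above looks like this in Python (a generic illustration of the shape, not the commenter's actual app code), next to the one-liner it should collapse to:

```python
def is_ready_verbose(flag: bool) -> bool:
    # The kind of redundancy an LLM sometimes emits: a conditional
    # that re-derives the boolean it was handed.
    if flag is True:
        return True
    else:
        return False

def is_ready(flag: bool) -> bool:
    # Idiomatic equivalent: the flag already is the answer.
    return flag
```

Both functions behave identically; the verbose form just adds branching noise that a reviewer then has to strip out.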

[–] antonim@lemmy.dbzer0.com 0 points 1 day ago (1 children)

Bro are you a robot yourself? Does that look like a glass full of wine?

[–] alsimoneau@lemmy.ca 2 points 1 day ago (2 children)

If someone asks for a glass of water, you don't fill it all the way to the edge. This is way overfull compared to what you're supposed to serve.

[–] antonim@lemmy.dbzer0.com 2 points 22 hours ago

Oh man...

That is the point, to show how AI image generators easily fail to produce something that rarely occurs out there in reality (i.e. is absent from training data), even though intuitively (from the viewpoint of human intelligence) it seems like it should be trivial to portray.

[–] wpb@lemmy.world 1 points 22 hours ago

Omg are you an llm?

[–] jwmgregory@lemmy.dbzer0.com -5 points 1 day ago (1 children)

Even if we accept all your market-liberal premises without question... in your own rhetorical framework the Disney lawsuit should be ruled against Disney.

If a human uses AI to recreate the exact tone, structure and other nuances of say, some best selling author, they harm the marketability of the original works which fails fair use tests (at least in the US).

Says who? In a free market, why is competition from similar products and brands such a threat as to be outlawed? Think reasonably about what you are advocating... you think authorship is so valuable or so special that one should be granted a legally enforceable monopoly at the loosest notion of authorship. This is the definition of a slippery slope, and yet it is the status quo of the society we live in.

On it "harming marketability of the original works," frankly, that's a fiction and anyone advocating such ideas should just fucking weep about it instead of enforce overreaching laws on the rest of us. If you can't sell your art because a machine made "too good a copy" of your art, it wasn't good art in the first place and that is not the fault of the machine. Even big pharma doesn't get to outright ban generic medications (even tho they certainly tried)... it is patently fucking absurd to decry artist's lack of a state-enforced monopoly on their work. Why do you think we should extend such a radical policy towards... checks notes... tumblr artists and other commission based creators? It's not good when big companies do it for themselves through lobbying, it wouldn't be good to do it for "the little guy," either. The real artists working in industry don't want to change the law this way because they know it doesn't work in their favor. Disney's lawsuit is in the interest of Disney and big capital, not artists themselves, despite what these large conglomerates that trade in IPs and dreams might try to convince the art world writ large of.

[–] elrik@lemmy.world 3 points 1 day ago (1 children)

you think authorship is so valuable or so special that one should be granted a legally enforceable monopoly at the loosest notions of authorship

Yes, I believe creative works should be protected as that expression has value and in a digital world it is too simple to copy and deprive the original author of the value of their work. This applies equally to Disney and Tumblr artists.

I think without some agreement on the value of authorship / creation of original works, it's pointless to respond to the rest of your argument.

[–] jwmgregory@lemmy.dbzer0.com 1 points 1 day ago* (last edited 1 day ago) (1 children)

I think without some agreement on the value of authorship / creation of original works, it's pointless to respond to the rest of your argument.

I agree, for this reason we’re unlikely to convince each other of much or find any sort of common ground. I don’t think that necessarily means there isn’t value in discourse tho. We probably agree more than you might think. I do think authors should be compensated, just for their actual labor. Art itself is functionally worthless, I think trying to make it behave like commodities that have actual economic value through means of legislation is overreach. It would be more ethical to accept the physical nature of information in the real world and legislate around that reality. You… literally can “download a car” nowadays, so to speak.

If copying someone’s work is so easily done why do you insist upon a system in which such an act is so harmful to the creators you care about?

[–] elrik@lemmy.world 2 points 23 hours ago

Because it is harmful to the creators that use the value of their work to make a living.

There already exists a choice in the marketplace: creators can attach a permissive license to their work if they want to. Some do, but many do not. Why do you suppose that is?

[–] antonim@lemmy.dbzer0.com 2 points 1 day ago

AI can “learn” from and “read” a book in the same way a person can and does,

If it's in the same way, then why do you need the quotation marks? Even you understand that they're not the same.

And either way, machine learning is different from human learning in so many ways it's ridiculous to even discuss the topic.

AI doesn’t reproduce a work that it “learns” from

That depends on the model and the amount of data it has been trained on. I remember the first public model of ChatGPT producing a sentence that was just one word different from what I found by googling the text (from some scientific article summary, so not a trivial sentence that could line up accidentally). More recently, there was a widely reported-on study of AI-generated poetry where the model was requested to produce a poem in the style of Chaucer, and then produced a letter-for-letter reproduction of the well-known opening of the Canterbury Tales. It hasn't been trained on enough Middle English poetry and thus can't generate any of it, so it defaulted to copying a text that probably occurred dozens of times in its training data.
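The Chaucer case above can even be checked mechanically. A crude memorization test (my own toy heuristic, not an established attribution method) is to measure the longest verbatim run shared between a model's output and a suspected source:

```python
def longest_common_substring(a: str, b: str) -> str:
    # Simple O(len(a) * len(b)) dynamic programming; fine for short texts.
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                if curr[j] > best_len:
                    best_len, best_end = curr[j], i
        prev = curr
    return a[best_end - best_len:best_end]

# Hypothetical model output vs. the Canterbury Tales opening.
output = "Whan that Aprille with his shoures soote"
source = ("Whan that Aprille with his shoures soote "
          "The droghte of March hath perced to the roote")
overlap = longest_common_substring(output, source)
```

If the shared run covers most of the output, as it does here, the model is reciting rather than composing: exactly the sparse-data failure mode described above.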