this post was submitted on 22 Apr 2025
1546 points (98.9% liked)

Memes

49969 readers
1191 users here now

Rules:

  1. Be civil and nice.
  2. Try not to excessively repost, as a rule of thumb, wait at least 2 months to do it if you have to.

founded 6 years ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] ByteJunk@lemmy.world 10 points 4 days ago (5 children)

Thank you for testing that out.

My experience with AI is that it's at a point where it can comprehend something like this very easily, and won't be tricked.

I suspect that this can, however, pollute a model if it's included as training data, especially if done regularly, as OP is suggesting.

[–] saigot@lemmy.ca 4 points 4 days ago (3 children)

If it was done with enough regularity to eb a problem, one could just put an LLM model like this in-between to preprocess the data.

[–] Azzu@lemm.ee 4 points 4 days ago (2 children)

That doesn't work, you can't train models on another model's output without degrading the quality. At least not currently.

[–] FooBarrington@lemmy.world 1 points 3 days ago

No, that's not true. All current models use output from previous models as part of their training data. You can't solely rely on it, but that's not strictly necessary.

load more comments (1 replies)
load more comments (1 replies)
load more comments (2 replies)