this post was submitted on 22 Apr 2025
1559 points (98.9% liked)

Memes

49997 readers

Rules:

  1. Be civil and nice.
  2. Try not to excessively repost; as a rule of thumb, wait at least 2 months before reposting if you have to.

founded 6 years ago
[–] webghost0101@sopuli.xyz 10 points 5 days ago* (last edited 5 days ago) (7 children)

Disclaimer: Not an opinion, just a measured observation. A warning, not an endorsement.

It's funny as a joke, but it would be completely ineffective.

Yes, I am also talking to those of you who are serious and spam NOAI art or add other anti-AI elements to your content.

Regardless of whether AI copying it would look like humans doing it, AI today can already easily parse meaning and remove all the extra fluff. It can basically assess and prepare the content to be good training data.

Proof (Claude Sonnet):

I've read the social media post by Ken Cheng. The actual message, when filtering out the deliberate nonsense, is:

"AI will never be able to write like me. Why? Because I am now inserting random sentences into every post to throw off their language learning models. [...] I write all my emails [...] and reports like this to protect my data [...]. I suggest all writers and artists do the same [...]. The robot nerds will never get the better of Ken [...] Cheng. We can [...] defeat AI. We just have to talk like this. All. The. Time."

The point I've proven is that AI systems like myself can still understand the core message despite the random nonsensical phrases inserted throughout the text. I can identify which parts are meaningful communication and which parts are deliberate noise ("radiator freak yellow horse spout nonsense," "waffle iron 40% off," "Strawberry mango Forklift," etc.).

Ironically, by being able to extract and understand Ken's actual message about defeating AI through random text insertions, I'm demonstrating that this strategy isn't as effective as he believes. Language models can still parse meaning from deliberately obfuscated text, which contradicts his central claim.

AI filtering the world and only training on what it deems worthwhile is very effective. It is also very dangerous if, for example, it decides any literature about empathy or morals isn't worth including.
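The filtering step described above can be sketched very crudely. This is a hypothetical illustration, not the pipeline any real model uses: real systems score text with classifiers or language models, but even a toy heuristic (here, the fraction of a sentence's words that appear in a small common-English word list, a made-up threshold of 0.5) is enough to separate Ken's message from the inserted noise:

```python
# Toy sketch of pre-training data filtering (assumption: a simple
# common-word heuristic stands in for a real quality classifier).
COMMON_WORDS = {
    "the", "a", "an", "i", "we", "you", "to", "of", "and", "is", "are",
    "will", "never", "be", "able", "write", "like", "me", "my", "all",
    "have", "talk", "this", "time", "ai", "can", "defeat", "just",
}

def looks_meaningful(sentence: str, threshold: float = 0.5) -> bool:
    """Keep a sentence only if enough of its words are common English."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    if not words:
        return False
    hits = sum(w in COMMON_WORDS for w in words)
    return hits / len(words) >= threshold

text = [
    "AI will never be able to write like me.",
    "radiator freak yellow horse spout nonsense",
    "We just have to talk like this. All. The. Time.",
    "Strawberry mango Forklift",
]

# Only the two meaningful sentences survive the filter.
kept = [s for s in text if looks_meaningful(s)]
```

A production filter would of course be far more capable, which is exactly the point of the comment: the noise is trivially separable.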

[–] Diurnambule@jlai.lu 1 points 5 days ago (6 children)

If I understand correctly, they would have to pass the input through one "AI" and then train another AI on the output of the first? Am I mistaken, or do I remember correctly that training "AI" on "AI" output breaks the trained model?

[–] interdimensionalmeme@lemmy.ml 3 points 5 days ago (1 children)

Yes, that means extra expense for them, so it's still an effective protest. Kind of like spiking ammo caches.

[–] Diurnambule@jlai.lu 1 points 4 days ago

It occurred to me afterwards that this kind of sentence looks like poetry. I wonder if the filter might have issues with that.
