this post was submitted on 29 Apr 2025
12 points (92.9% liked)

[–] TheTechnician27@lemmy.world 2 points 7 months ago* (last edited 7 months ago) (1 children)

To be fair, though, this experiment was stupid as all fuck. It was run on /r/changemyview to see if users would recognize that the comments were created by bots. The study's authors conclude that the users didn't recognize this. [EDIT: To clarify, the study was seeing if it could persuade the OP, but they did this in a subreddit where you aren't allowed to call out AI. If an LLM bot gets called out as such, its persuasiveness inherently falls off a cliff.]

Except, you know, Rule 3 of commenting in that subreddit is: "Refrain from accusing OP or anyone else of being unwilling to change their view, of using ChatGPT or other AI to generate text, [emphasis not even mine] or of arguing in bad faith."

It's like creating a poll to find out if women in Afghanistan are okay with having their rights taken away but making sure participants have to fill it out under the supervision of Hibatullah Akhundzada. "Obviously these are all brainwashed sheep who love the regime", happily concludes the dumbest pollster in history.

[–] Sixtyforce@sh.itjust.works 2 points 7 months ago (1 children)

Wow. That's really fucking stupid.

[–] sharkfinsoup@lemmy.ml -1 points 7 months ago

I don't think so. Yeah, the researchers broke the rules of the subreddit, but it's not like every company that uses AI for advertising, promotional purposes, propaganda, and misinformation is going to adhere to those rules either.

The mods and community should not assume that just because the rules say no AI, people won't use it for nefarious purposes. While this study doesn't really add anything new we didn't already know or assume, it does highlight how vigilant and cautious we should be about what we see on the Internet.

[–] The_Picard_Maneuver@lemmy.world 0 points 7 months ago (1 children)

That story is crazy and very believable. I 100% believe that AI bots are out there astroturfing opinions on reddit and elsewhere.

I'm unsure if that's better or worse than real people doing it, as has been the case for a while.

[–] otter@lemmy.dbzer0.com 0 points 7 months ago (1 children)

Belief doesn't even have to factor into it; it's a plain-as-day truth. The sooner we collectively accept this fact, the sooner we change this shit for the better. Get on board, citizen. It's better over here.

[–] The_Picard_Maneuver@lemmy.world 1 points 7 months ago (1 children)

I worry that it's only better here right now because we're small and not a target. The worst we seem to get are the occasional spam bots. How are we realistically going to identify LLMs that have been trained on reddit data?

[–] otter@lemmy.dbzer0.com 1 points 7 months ago* (last edited 7 months ago)

Honestly? I'm no expert and have no actionable ideas in that direction, but I certainly hope we're able to work together as a species to overcome the unchecked greed of a few parasites at the top. ~#LuigiDidNothingWrong~

[–] needthosepylons@lemmy.world 0 points 7 months ago (1 children)

Err, yeah, I get the meme and it's quite true in its own way...

BUT... this research team REALLY needs an ethics committee. A heavy-handed one.

[–] fishos@lemmy.world -1 points 7 months ago

As much as I want to hate the researchers for this, how are you going to ethically test whether you can manipulate people without... manipulating people? And isn't there an argument to be made for harm reduction? I mean, this stuff is already going on. Do we just ignore it, or only test it in sanitized environments that won't really apply to the real world?

I dunno, mostly just shooting the shit, but I think there's an argument to be made that this kind of research and its results are more valuable than the potential harm. Though the way this particular research team went about it, including changing the study fundamentally without further approval, does pose problems.

[–] arotrios@lemmy.world 0 points 7 months ago (1 children)

Deleted by moderator because you upvoted a Luigi meme a decade ago

...don't mind me, just trying to make the reddit experience complete for you...

[–] GreenKnight23@lemmy.world 0 points 7 months ago (1 children)

That's funny.

I had several of my Luigi posts and comments removed -- on Lemmy. Let's see if it still holds true.

[three image attachments]

[–] dragonfucker@lemmy.nz -1 points 7 months ago (1 children)

That's because your username is wrong. Your username is GreenKnight23@lemmy.world, but it should be GreenKnight23@lemmy.nz. That would fix your problem.