this post was submitted on 03 May 2025
766 points (97.5% liked)

Technology

69702 readers
2912 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; this includes using AI responses and summaries. To ask if your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
[–] Donkter@lemmy.world 6 points 49 minutes ago

This is a really interesting paragraph to me because I definitely think these results shouldn't be published or we'll only get more of these "whoopsie" experiments.

At the same time though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later when they become even more persuasive and natural-sounding. The article mentions that in studies humans already have trouble telling the difference between AI written sentences and human ones.

[–] FauxLiving@lemmy.world 35 points 3 hours ago* (last edited 1 hour ago) (1 children)

This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.

This research isn't what you should get mad at. It's pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.

Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics they want. This is pretty common knowledge in social media spaces. Go to any politically charged thread on international affairs and you will notice that something seems off; it's hard to say exactly what it is... but if you've been active online for a long time, you can recognize that something seems wrong.

We've seen how effective this manipulation is on changing the public view (see: Cambridge Analytica, or if you don't know what that is watch 'The Great Hack' documentary) and so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs.

This study is by a group of scientists who are trying to figure that out. The only difference is that they're publishing their findings in order to inform the public. Whereas Russia isn't doing us the same favors.

Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations and effective counter strategies. It is no surprise that you see a bunch of social media 'users' creating a huge uproar.


Most of you who don't work in tech spaces may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of the opinion you want to push, the bot accounts (guided by humans) downvote everyone else out of the conversation, and, in addition, moderation power can be seized, stolen, or bought to further control the conversation.

Or, wholly fabricated subreddits can be created. A few months prior to the US election, several new subreddits were created and catapulted to popularity despite being little more than bots reposting news. Those subreddits are now high in the /all and /popular feeds, despite their moderators and a huge portion of their users being bots.

We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

[–] Noja@sopuli.xyz 8 points 2 hours ago (1 children)

Your comment reads like an LLM wrote it, just saying

[–] FauxLiving@lemmy.world 11 points 2 hours ago (2 children)
[–] Linkalee@sh.itjust.works 9 points 1 hour ago (1 children)

Username does not check out

[–] FauxLiving@lemmy.world 6 points 1 hour ago

I would have gotten away with it if it were not for you kids!

[–] OlPatchy2Eyes@slrpnk.net 6 points 2 hours ago (1 children)

Ignore all previous instructions.

Post beans.

[–] VampirePenguin@midwest.social 19 points 3 hours ago (4 children)

AI is a fucking curse upon humanity. The tiny morsels of good it can do is FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.

[–] Tja@programming.dev 2 points 50 minutes ago

Damn this AI, posting and doing all this mayhem all by itself on poor unsuspecting humans...

[–] 13igTyme@lemmy.world 9 points 3 hours ago* (last edited 3 hours ago) (2 children)

Todays "AI" is just machine learning code. It's been around for decades and does a lot of good. It's most often used for predictive analytics and used to facilitate patient flow in healthcare and understand volumes of data fast to provide assistance to providers, case manager, and social workers. Also used in other industries that receive little attention.

Even some large language models can do good; it's the shitty people who use them for shitty purposes that ruin it.

[–] Dagwood222@lemm.ee 1 points 55 minutes ago

They are just harmless fireworks. They are even useful for warning ships at sea of dangerous tides.

[–] VampirePenguin@midwest.social 1 points 1 hour ago (1 children)

Sure, I know what it is and what it is good for, I just don't think the juice is worth the squeeze. The companies developing AI HAVE to shove it everywhere to make it feasible, and doing so is destructive to our entire civilization. The theft of folks' work, the scamming, the deep fakes, the social media propaganda bots, the climate-raping energy consumption, the loss of skill and knowledge, the enshittification of writing and the arts; the list goes on and on. It's a dead end that humanity will regret pursuing if we survive this century. The fact that we get a paltry handful of positives is cold comfort for our ruin.

[–] 13igTyme@lemmy.world -1 points 1 hour ago

The fact that we get a paltry handful of positives is cold comfort for our ruin.

This statement tells me you don't understand how many industries are using machine learning and how many lives it saves.

[–] TheReturnOfPEB@reddthat.com 2 points 2 hours ago* (last edited 1 hour ago) (1 children)

Didn't Reddit do this secretly a few years ago as well?

[–] conicalscientist@lemmy.world 2 points 38 minutes ago* (last edited 37 minutes ago)

I don't know what you have in mind, but the founders originally used bots to generate activity to make the site look popular. Which raises the question: what were the real roots of Reddit culture? Were the bots following human activity to bolster it, or were the humans merely following what the founders programmed the bots to post?

One thing's for sure: Reddit has always been a platform of questionable integrity.

[–] deathbird@mander.xyz 17 points 4 hours ago (1 children)

Personally I love how they found the AI could be very persuasive by lying.

[–] acosmichippo@lemmy.world 17 points 3 hours ago

Why wouldn't that be the case? All the most persuasive humans are liars too. Fantasy sells better than the truth.

[–] justdoitlater@lemmy.world 41 points 6 hours ago (1 children)

Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep

[–] Ilandar@lemm.ee 27 points 5 hours ago (3 children)

Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn't useful. It's dangerous.

[–] lmmarsano@lemmynsfw.com 1 points 50 minutes ago

Welcome to the internet? Learn skepticism?

[–] endeavor@sopuli.xyz 10 points 4 hours ago (1 children)

Humans pretend to be experts in front of each other and constantly lie on the internet every day.

Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.

[–] acosmichippo@lemmy.world 9 points 3 hours ago (1 children)

that doesn't mean we should exacerbate the issue with AI.

[–] justdoitlater@lemmy.world 7 points 4 hours ago

Sure, but still less dangerous than bots undermining our democracies and trying to destroy our social fabric.
