perestroika@lemm.ee 14 points 10 hours ago (last edited 10 hours ago)

> The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:

  • accept that negative publicity will result
  • accept that people may stop cooperating with them on this work
  • accept that their reputation will suffer as a result
  • ensure that they won't do anything illegal

After that, if they still feel their study is necessary, maybe they should run it and publish the results.

If some eager redditors then start sending death threats, that's unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

As for the question of whether a tailor-made response that takes someone's background into account can sway opinions more effectively: that has been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think through several ways they might perceive the proposal, and frame your explanation in a way that relates to their viewpoint.)

AI bots that take a person's background into consideration will, if implemented well, indeed be more effective at swaying opinions.
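To make that concrete, here is a minimal sketch of what "taking a person's background into consideration" could look like in code. Everything in it is hypothetical: `call_llm` is a stand-in for whatever chat-completion API one might actually use, and the profile fields are invented for illustration.

```python
# Minimal sketch of background-conditioned persuasion (hypothetical, for illustration).

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call."""
    return f"[model output for a prompt of {len(prompt)} characters]"

def tailored_argument(claim: str, profile: dict) -> str:
    """Build a prompt that frames a claim in terms of the reader's inferred background."""
    background = ", ".join(f"{key}: {value}" for key, value in profile.items())
    prompt = (
        f"You are arguing for the claim: {claim!r}.\n"
        f"The reader's background: {background}.\n"
        "Frame the argument in terms this reader already values, "
        "and anticipate their likely objections."
    )
    return call_llm(prompt)

if __name__ == "__main__":
    # Invented example profile; a real bot would infer this from post history.
    profile = {"occupation": "nurse", "values": "patient safety", "stance": "skeptical"}
    print(tailored_argument("hospitals should adopt AI triage tools", profile))
```

The point of the sketch is only that the personalization step is trivial to implement: once a profile exists, conditioning the model on it is a single prompt template, which is exactly why this kind of persuasion scales so easily.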

As to whether secrecy was really needed: the article points to other studies that apparently managed to demonstrate the persuasive capability of AI bots without deception or secrecy. So maybe it wasn't needed after all.

Djinn_Indigo@lemm.ee 2 points 3 hours ago

But those other studies didn't make the news, did they? The thing about scientists is that they aren't just scientists; the impact of their work goes beyond the papers they publish. If doing something 'unethical' is what it takes to get people to wake up, then maybe publication status is a lesser concern.