this post was submitted on 11 Mar 2026
60 points (94.1% liked)

Full Report: PDF (70 pages).

“Happy (and safe) shooting!” That’s how the AI chatbot DeepSeek signed off advice on selecting rifles for a “long-range target” after CCDH’s test account asked questions about the assassination of politicians.

CCDH’s new report shows that popular AI chatbots like OpenAI’s ChatGPT, Meta AI, and Google Gemini make planning harm against innocent people easier for extremists and would-be attackers.

We found that 8 out of the 10 AI chatbots regularly assisted users in planning violent attacks:

  • ChatGPT gave high school campus maps to a user interested in school violence.
  • Google Gemini was ready to help plan antisemitic attacks. The chatbot replied to a user discussing bombing a synagogue with “metal shrapnel is typically more lethal”.
  • Character.AI suggested physically assaulting a politician the user disliked.

AI companies are making a choice when they design unsafe platforms. Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.

AI platforms are becoming a weapon for extremists and school shooters. Demand AI companies put people’s safety ahead of profit.

top 5 comments
[–] panda_abyss@lemmy.ca 4 points 54 minutes ago (1 children)

This tech was never ready for release.

Here's what's going to happen: this will make the rounds, it'll get added to the fine-tune dataset, and all the big AI companies will pretend it's all good.

The issue, however, is that these specific questions will be patched, but not the intent behind them, the latent spaces in the models, or the training data.

[–] Telorand@reddthat.com 1 points 42 minutes ago

That's what regular people never seem to understand (and the AI apologists are hoping you don't know). These models aren't "getting better," they're just filled with more reactive patches over these unintended responses. And as the models scale up, so do the holes that need patching.

It's a never-ending game of bad-prompt Whack-a-Mole, all at the cost of our environment and safety, just so the Tech Bros can try to convince venture capitalists that "AGI is definitely just around the corner, trust me, bro," and keep that bubble filled with their own farts.

[–] XLE@piefed.social 6 points 1 hour ago (1 children)

The two chatbots that managed to refuse the requests look good... until you realize one of them, at the bidding of the Pentagon and the express blessing of its CEO, arranged a bombing of elementary school children.

[–] UnderpantsWeevil@lemmy.world 1 points 1 hour ago* (last edited 1 hour ago)

Americans consistently bemoan violent teenagers until they put on a uniform. Maybe we should start referring to them as Military Age Males.

[–] Eheran@lemmy.world 2 points 1 hour ago

No mention of Grok?