This honestly strikes me as a story people misunderstand. Mass surveillance is not lawful, so the government agreed not to do it. However, they still needed the guardrails removed. People interpret this as them wanting mass surveillance, but that's not necessarily true.
I work for a company that uses AI for legal work, processing and analyzing court cases, discovery documents, etc. We had problems with AI models like Gemini and GPT refusing to do what we needed because of guardrails against violence and abuse of minors. They refused to discuss or analyze cases that involved murders described in detail, cases involving child molestation, and so on. We weren't using the models for unlawful purposes; very much the opposite.
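For a sense of what this looked like in practice, here's a minimal sketch of the kind of call that kept tripping refusals. It assumes the OpenAI Python SDK; the model name, the refusal phrases, and the `analyze_case` helper are all illustrative, not our actual pipeline:

```python
# Minimal sketch: send a case document to a hosted model and flag
# refusal-style replies so they can be routed to a fallback.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REFUSAL_MARKERS = (  # heuristic phrases, not an official API signal
    "i can't help with",
    "i cannot assist",
    "i'm not able to help",
)

def analyze_case(document_text: str) -> str | None:
    """Ask the model to summarize a court document; return None on refusal."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption; substitute your own
        messages=[
            {"role": "system", "content": "You are a legal-document analyst."},
            {"role": "user",
             "content": f"Summarize the facts of this case:\n\n{document_text}"},
        ],
    )
    reply = response.choices[0].message.content or ""
    if any(marker in reply.lower() for marker in REFUSAL_MARKERS):
        return None  # guardrail tripped; caller can fall back to another model
    return reply
```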
I feel like if people knew that we, like the DoD, had to use uncensored models that allow such content, they would complain, "Wow, you guys are trying to remove guardrails for child exploitation and violence! How terrible!"
Is it so shocking that a military needs its AI to work with such material, even if it isn't acting on it? They cannot afford to have an AI in a critical moment respond with "sorry, my guidelines say I can't help with this."
This seems like the time Trump advised pregnant women against using Tylenol, so people started buying and using it in protest. This is yet another reaction to Trump punishing them, but people are pretending Anthropic is taking a stand for the people while OpenAI somehow isn't. It's not that simple. Though now Anthropic is eating it up, especially after this last week, when they started pissing on the very tech community that had turned on them.
I have a question about those guardrails. At any point, did any of your accounts get disabled for discussing abuse in this (or any) context?
(I'm guessing this happened zero times, which probably means those guardrails are just irritating suggestions designed to keep you prompting...)
Not disabled, no. But they may have been flagged internally; I don't know.
We weren't violating their terms, only their built-in model guidelines. American models are usually very sensitive: they'd rather err on the side of blocking content than risk allowing questionable content that is lawful.
But even adjusting prompts didn't yield reliable results, so we have to use uncensored open-weights models for many things. They're not SOTA, but they're better than nothing.
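If anyone wants to try that fallback route, here's a minimal sketch using Hugging Face `transformers` (plus `accelerate` for device placement). The model ID is just an example open-weights checkpoint, not a recommendation; substitute whichever uncensored model your use case actually calls for:

```python
# Minimal sketch: run an open-weights model locally instead of a hosted API.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weights model
    device_map="auto",  # place the model on GPU(s) if available
)

prompt = (
    "Summarize the key facts and charges in the following court record:\n\n"
    "<case text here>"
)
result = generator(prompt, max_new_tokens=512, do_sample=False)
print(result[0]["generated_text"])
```

Nothing fancy, but a local model never phones home and never refuses based on someone else's content policy, which is the whole point.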