this post was submitted on 28 Feb 2026
1894 points (99.2% liked)
Technology
I think people are missing the important bit a little. This government wants to send out autonomous weapons along with mass surveillance. They'll just murder anyone they want, assuming the AI even gets it right in the first place.
Here we are in The Running Man and no one sees it coming. This is why Stephen King is so against this administration. He predicted it.
Also, to be clear: mass surveillance, not surveillance itself. And fully autonomous weapons.
Don't get distracted by the birdy, folks. Anthropic is not your friend, nor some great protector of the American people. They were already deeply embedded in the US government, as their product was the only one certified for use with classified documents.
They weren't standing up for us, they were splitting hairs on exactly how far they'd openly go.
I've also seen statements that Anthropic's stance against fully autonomous weapons was simply due to results not yet being as consistent as they were comfortable putting their name on, not due to any opposition towards use in/with weaponry.
OpenAI also claims to have the same limitations. So someone's lying.
Amodei said in an interview that the DoW altered their contract to appear to compromise, so that it looked like they were agreeing to those use limits, but the legalese accompanying the updates rendered that text pointless. Basically, "We won't use Claude for mass domestic surveillance and fully automated killing, unless we really want to." My guess is OpenAI signed the exact same contract and just pretended not to understand the toothlessness of the guardrails.
"Guardrail" and "toothless" are basically synonymous at this point, given the pile of evidence that these multi-billion-dollar tech companies have been helping people kill themselves and then hiding the evidence.
Or, even more ironically, maybe they used ChatGPT to analyze the contract changes and it missed them. That would tickle me to some extent, but it would also solidify the terror of such a system being used to make life-altering decisions.