this post was submitted on 02 Mar 2026
418 points (96.7% liked)
Technology
It's not a very solid point. They said such weapons may become necessary at some point, but that right now they're irresponsible.
They're not ruling them out in the future; their focus is on today's problem.
Serinus, did you see the part where Anthropic wants to develop them with the US military?
The safeguards being that their technology won't be used for mass surveillance or for developing autonomous drones. That's explicitly mentioned in their statement - the one you're desperately trying to massage and misquote to make it seem like they're saying something they're not - and anyone can just go and read it themselves.
Iconoclast, I see you edited your post after I replied. You did not answer whether you accept the fact that Anthropic explicitly wanted to develop fully autonomous AI alongside the Trump Department of "War."
Either you're lying, or you're the one desperately trying to reshape the truth.
Iconoclast, you have moved beyond accidental deception into intentional lies.
Anthropic offered to work directly with the Department of "War" on R&D to improve the reliability of autonomous bombing systems.
That's what your link says. Do you deny this explicit fact?
That's your interpretation - not a direct quote.
Iconoclast, don't be disingenuous.
The direct quote is "We have offered to work directly with the Department of War on R&D to improve the reliability of these systems". "We" meaning Anthropic. "These systems" meaning fully autonomous weapons.
Do you acknowledge they did this? Try not to weasel out of answering with more pedantry. It's almost as disturbing as your apparent defense of that Silicon Valley AI cult.
They are not willing to let their current models (Claude) be used in fully autonomous weapons right now, because they believe today’s frontier AI is still too unreliable and prone to errors. They explicitly say they “will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
However, they have offered to work directly with the Department of Defense on R&D to improve the reliability of autonomous weapons technology in general (with their two requested safeguards in place) - so that in the future these systems might become safe and trustworthy enough to use.
They're not ideologically against autonomous weapons systems. They're against ones that run on our current AI models.
Exactly. Which should have you condemning their warmonger ambitions, if you had moral consistency.
Which becomes true to them as soon as it doesn't kill Americans.
It's okay, you can just say you endorse America building autonomous weapons to wipe out people of any nationality. You can acknowledge that doing this gets a green light from Warmonger Dario Amodei. I support you coming out of that closet.