this post was submitted on 28 Feb 2026
1861 points (99.3% liked)

Technology

82069 readers
3446 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
[–] ArmchairAce1944@discuss.online 4 points 18 hours ago

I last used ChatGPT in 2024. Never found it satisfying.

[–] pelespirit@sh.itjust.works 36 points 1 day ago (1 children)

After Anthropic refused flat out to agree to apply Claude AI to autonomous weapons and mass surveillance of American citizens, OpenAI jumps right into bed with the United States Department of War.

I think people are missing the important bit. This government wants to deploy autonomous weapons along with mass surveillance. They'll just murder anyone they want, assuming the AI even gets it right in the first place.

Here we are in The Running Man and no one sees it coming. This is why Stephen King is so against this administration. He predicted it.

[–] wizardbeard@lemmy.dbzer0.com 12 points 1 day ago (1 children)

Also, it was mass surveillance they refused, not surveillance itself. And fully autonomous weapons.

Don't get distracted by the birdy, folks. Anthropic is not your friend, nor some great protector of the American people. They were already deeply embedded in the US Government, as their product was the only one certified for use with classified documents.

They weren't standing up for us, they were splitting hairs on exactly how far they'd openly go.

I've also seen statements that Anthropic's stance against fully autonomous weapons was simply due to results not yet being as consistent as they were comfortable putting their name on, not due to any opposition towards use in/with weaponry.

OpenAI also claims to have the same limitations. So someone's lying.

[–] Hackworth@piefed.ca 5 points 1 day ago (2 children)

Amodei said in an interview that the DoW altered their contract to appear to compromise, so that it looked like they were agreeing to those use limits, but the legalese accompanying the updates rendered that text pointless. Basically, "We won't use Claude for mass domestic surveillance and fully automated killing, unless we really want to." My guess is OpenAI signed the exact same contract and just pretended not to understand the toothlessness of the guardrails.

[–] XLE@piefed.social 4 points 22 hours ago

"Guardrail" and "toothless" are basically synonymous, based on the pile of evidence that these multi-billion-dollar tech companies have been helping people kill themselves and hide the evidence.


It’s because this administration wants to use AI/ML to create a list of domestic strike targets based on people who have said things dumpy doesn’t like.

[–] floofloof@lemmy.ca 34 points 1 day ago (6 children)

I can't believe people were paying for it in the first place.

[–] morto@piefed.social 15 points 1 day ago

One more boycott I can't join because I never touched the company lol

[–] turdburglar@piefed.social 25 points 1 day ago (1 children)

nice headline, but wtf is windows central?

[–] zikzak025@lemmy.world 34 points 1 day ago (1 children)

A Microsoft-oriented news outlet.

Think similar to MacRumors/9to5Mac/AppleInsider for Apple.

[–] supersquirrel@sopuli.xyz 19 points 1 day ago (1 children)

There are so many levels to hell I haven't even heard of.
