this post was submitted on 28 Feb 2026
2049 points (99.0% liked)
Technology
Canceling now only means you are not continuing to contribute to the war atrocities that the technology is going to be used for. If you had an account and used it, you have already contributed.
Thanks for doing this - it isn't a proper leftist get-together without some assclown imposing impossible purity tests.
Impossible purity test? That's utter bull crap. There have been many warnings about the negative uses of AI for years now, for example: https://aiforgood.itu.int/event/addressing-the-dark-sides-of-ai/
Expecting people to understand that these uses could be expanded into committing state-sponsored atrocities is not a stretch.
Well, judge not lest you too be judged...
There is no such thing as "ethical" AI coming from Big Tech. Google, Microsoft, Anthropic, Amazon — all of them built their machines without consent, all of their machines have been subsidized with our taxes and resources, and Anthropic is a pro-Trump, pro-foreign-dictator company that crossed every single red line until the very last one.
Anthropic was pro mass surveillance of foreigners.
It was okay with helping Trump plan criminal invasions.
It just doesn't want to be held responsible for pushing the "go" button, but we know their software was one suggestion away from doing it anyway.
That's no judgment on me. I don't use AI. I tried it one night 3-4 years ago, realized that it wasn't ready for widespread adoption, and haven't touched it since.