There are two things at play here. First, all models being released these days have safety built into the training. In the West, we might focus on preventing people from harming others or hacking; in China, they're preventing people from saying anything politically unsupportive of China. But in a way, we are all "exporting" our propaganda.
Second, as called out in the article, these responses are clearly based on the training data. That is where the misinformation starts, and you can't "fix" the problem without first fixing that data.
I don't think anyone can say with a straight face that these two cases are both propaganda. So-called "western propaganda" here is really just advising the user that maybe self-harm, etc. is not such a good idea. It's not explicitly telling the user completely unverifiable false facts.