this post was submitted on 05 Apr 2026
109 points (91.0% liked)

Technology

top 12 comments
[–] IratePirate@feddit.org 37 points 2 weeks ago
[–] schwim@piefed.zip 31 points 2 weeks ago (1 children)

Absolutely nobody needed a new study to show the risks. We all saw what could happen when Musk altered Grok to behave in ways he approved of.

[–] muntedcrocodile@hilariouschaos.com -2 points 2 weeks ago (1 children)

And you think the other AI companies didn't do similar things?

[–] schwim@piefed.zip 8 points 2 weeks ago

Of course they do. That was literally my point. Musk didn't bother to hide it, so we don't need new studies to show that's how they operate.

[–] magnetosphere@fedia.io 17 points 2 weeks ago

“AI systems controlled by billionaire tech bros are certain to give me an answer that’s fair and unbiased!”

[–] DarrinBrunner@lemmy.world 16 points 2 weeks ago (1 children)

I suppose I should be surprised that people WANT to give up their right to think for themselves. But, I'm not.

To address this gap, researchers ran an experiment during the final week of Japan's February 8, 2026, general election. The experiment reveals a striking pattern: when asked which party to support, five major AI models from three companies overwhelmingly directed voter profiles with left-leaning policy positions toward the Japanese Communist Party (JCP). The reason, according to the researchers, has to do with the information environment AI systems can access.

...

Furthermore, left-leaning policy views in voter profiles caused all five AI models to converge overwhelmingly on recommending the JCP, even though other parties hold broadly similar positions on the issues tested. The concentration of JCP recommendations under left-leaning policy stances is therefore not explained by ideological distinctiveness.

[–] XLE@piefed.social 1 points 2 weeks ago

There's an interesting reason why, too. It's not that the AI is leftist; it's that the JCP does effective SEO on its websites and doesn't block them from corporate AI crawlers.

It's really easy to abuse AI-targeted SEO, so this could be used way more maliciously in the near future.
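For context, whether a party's pages are even visible to these systems often comes down to crawler directives. A minimal sketch of a robots.txt that welcomes the major AI crawlers (GPTBot, ClaudeBot, and Google-Extended are the published user-agent tokens for OpenAI, Anthropic, and Google; the opt-out shown in comments is the standard inverse):

```text
# Allow the major AI crawlers to index the whole site
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

# A site wishing to opt out of AI crawling would instead use, e.g.:
# User-agent: GPTBot
# Disallow: /
```

A site that serves this file is effectively volunteering its content for AI retrieval, while competitors who disallow these agents disappear from the models' information environment.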

[–] TwilitSky@lemmy.world 5 points 2 weeks ago

Friends ask me who I voted for every election. I always go into a long-winded explanation of the candidates and what they stand for before sharing my selection and reasoning.

[–] TropicalDingdong@lemmy.world 5 points 2 weeks ago

People are going to do the laziest thing possible. More at 11.

[–] tal@lemmy.today 1 points 2 weeks ago

I think that if you aspire to regulate the political positions that AIs should recommend, you... okay, I think that's probably not a great idea. But setting that aside, it seems pretty odd that you'd want to do that without also regulating the political positions of the webpages that search engines return, or the positions that news media may take, which is what I'd consider the alternative information sources.

[–] timewarp@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

I mean, this is both good and bad, isn't it? If people are relying on Grok, they're probably going to get bullshit. I've actually found ChatGPT to be the worst and most biased compared to Gemini or Claude, especially as it pertains to Israel. ChatGPT will claim that killing tens of thousands of innocent people in Gaza is nuanced because of religious doctrine, while Gemini, at least recently, will acknowledge human rights abuses, among other things.

If people are using AI to form an opinion entirely, that's probably not a good thing. But if they're asking questions to learn more about something, it's likely a good thing that they're trying to become more educated. Again, it really depends on the AI, how you use it, and whether people treat everything as fact or do further research based on the answers, as well as asking for sources.

[–] Xylight@lemdro.id 1 points 2 weeks ago

@grok is this true