this post was submitted on 12 Feb 2026
154 points (100.0% liked)


An AI safety researcher has quit US firm Anthropic with a cryptic warning that the "world is in peril".

In his resignation letter shared on X, Mrinank Sharma told the firm he was leaving amid concerns about AI, bioweapons and the state of the wider world.

He said he would instead look to pursue writing and the study of poetry, and move back to the UK to "become invisible".

It comes in the same week that an OpenAI researcher said she had resigned, sharing concerns about the ChatGPT maker's decision to deploy adverts in its chatbot.

Anthropic, best known for its Claude chatbot, had released a series of commercials aimed at OpenAI, criticising the company's move to include adverts for some users.

The company, which was formed in 2021 by a breakaway team of early OpenAI employees, has positioned itself as having a more safety-orientated approach to AI research compared with its rivals.

Sharma led a team there which researched AI safeguards.

He said in his resignation letter that his contributions included investigating why generative AI systems suck up to users, combating AI-assisted bioterrorism risks and researching "how AI assistants could make us less human".

But he said despite enjoying his time at the company, it was clear "the time has come to move on".

****"The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment," Sharma wrote.

He said he had "repeatedly seen how hard it is to truly let our values govern our actions" - including at Anthropic, where he said staff "constantly face pressures to set aside what matters most".

Sharma said he would instead look to pursue a poetry degree and writing.

He added in a reply: "I'll be moving back to the UK and letting myself become invisible for a period of time."

Those departing AI firms which have loomed large in the latest generative AI boom - and which have sought to retain talent with huge salaries or compensation offers - often do so with plenty of shares and benefits intact.

Eroding principles

Anthropic calls itself a "public benefit corporation dedicated to securing [AI's] benefits and mitigating its risks".

In particular, it has focused on preventing the risks it believes are posed by more advanced frontier systems, such as models becoming misaligned with human values, being misused in areas such as conflict, or becoming too powerful.

It has released reports on the safety of its own products, including when it said its technology had been "weaponised" by hackers to carry out sophisticated cyber attacks.

But it has also come under scrutiny over its practices. In 2025, it agreed to pay $1.5bn (£1.1bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.

Like OpenAI, the firm also seeks to capitalise on the technology's benefits, including through its own AI products such as its ChatGPT rival Claude.

It recently released a commercial that criticised OpenAI's move to start running ads in ChatGPT.

OpenAI boss Sam Altman had previously said he hated ads and would use them as a "last resort".

Last week, he hit back at the advert's description of the move as a "betrayal" - but was mocked for his lengthy post criticising Anthropic.

Writing in the New York Times on Wednesday, former OpenAI researcher Zoe Hitzig said she had "deep reservations about OpenAI's strategy".

"People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife," she wrote.

"Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent."

Hitzig said a potential "erosion of OpenAI's own principles to maximise engagement" might already be underway at the firm.

She said she feared this could accelerate if the company's approach to advertising did not reflect its stated mission to benefit humanity.

BBC News has approached OpenAI for a response.

top 14 comments
[–] BenderRodriguez@lemmy.world 47 points 2 hours ago

A researcher left his high seat,
With a warning of global defeat.
To the UK he'll flee,
To write poetry,
And vanish in shadowy retreat.

[–] Hackworth@piefed.ca 18 points 2 hours ago* (last edited 1 hour ago) (1 children)

FWIW, Anthropic did just fund a pro-regulation super PAC to oppose OpenAI's/Palantir's pro-Trump/anti-regulation PAC, and:

The Pentagon is at odds with artificial-intelligence developer Anthropic over safeguards that would prevent the government from deploying its technology to target weapons autonomously and conduct U.S. domestic surveillance. (Reuters)

But I kinda doubt they'll be able to play the good guy for long.

[–] XLE@piefed.social 4 points 1 hour ago (1 children)

The regulations this PAC promotes are almost laughable. Do they mention CSAM generation? Deepfakes? Pollution? Water table destruction? Suicide encouragement? Nope.

Those harms are apparently acceptable.

Instead, they say we should focus on "the nearest-term high risks: AI-enabled biological weapons and cyberattacks." Science fiction.

[–] Hackworth@piefed.ca 4 points 1 hour ago* (last edited 37 minutes ago) (1 children)

They're advocating for transparency and for states to be able to have their own AI laws. I see that as positive. And as part of that transparency, Anthropic publishes its system prompts, which are sent with every message. They devote a significant portion to mental health, suicide prevention, not enabling mania, etc. So I wouldn't say they see it as "acceptable."

[–] XLE@piefed.social 3 points 47 minutes ago (1 children)

If Anthropic actually wants to prevent self-harm and CSAM through regulation, why didn't they recommend regulating those things?

Anthropic executive Jason Clinton harassed LGBT Discord users, so forgive me if I don't take their PR at face value. No AI Corpo is your friend, which is a lesson I thought we had learned from Sam Altman and Elon Musk already.

[–] Hackworth@piefed.ca 1 points 24 minutes ago

So what I meant by "doubt they’ll be able to play the good guy for long" is exactly that no corpo is your friend. But I also believe perfect is the enemy of good, or at least better. I want to encourage companies to be better, knowing full well that they will not be perfect. Since Anthropic doesn't make image/video/audio generators, they may just not see CSAM as a directly related concern for the company. A PAC doesn't have to address every harm to be a source of good.

As for self-harm, that's an alignment concern, the main thing they do research on. And based on what they've published, they know that perfect alignment is not in our foreseeable future. They've made a lot of recent improvements that make it demonstrably harder to push a bot to dark traits. But they know damn well they can't prevent it without some structural breakthroughs. And who knows if those will ever come?

I read that 404 media piece when it got posted here, and this is also probably that guy's fault. And frankly, Dario's energy creeps me out. I'm not putting Anthropic on a pedestal here, they're just... the least bad... for now?

[–] X@piefed.world 3 points 58 minutes ago (1 children)

He said he […] move back to the UK to "become invisible".

Literally won’t be happening, but okay.

[–] excursion22@piefed.ca 1 points 22 minutes ago

Yeah, not really the best place to go to be invisible. However, who knows if that's actually where he'll go.

[–] hansolo@lemmy.today 42 points 2 hours ago (1 children)

Translation: "All y'all gonna get sued so hard one day. I'm out, I got paid $74 million last year."

[–] panda_abyss@lemmy.ca 32 points 2 hours ago* (last edited 2 hours ago)

If I got paid $74M a year, I would work one year.

I get it.

[–] HubertManne@piefed.social 3 points 1 hour ago

All the tasty humans get so paranoid about AI and how it might be trying to hide among them and blend in so it can prey on them one by one. It's like, lower your temperature, my male siblings!

[–] ArgentRaven@lemmy.world 8 points 2 hours ago (1 children)

This is literally the plot of Player Piano by Kurt Vonnegut. Interesting that he was able to predict it that far ahead.

[–] TheRealKuni@piefed.social 6 points 1 hour ago

It’s because he was unstuck in time. Slaughterhouse V was actually autobiographical.

[–] XLE@piefed.social 1 points 1 hour ago

"AI safety" continues to be a grift to promote AI products.

Mrinank Sharma of Anthropic should be remembered as a liar for lines like

The world is in peril. And not just from AI or bioweapons, but from a whole series of interconnected crises unfolding in this very moment

Despite his letter insisting he's leaving Anthropic to be more honest, he's just regurgitating the same propaganda as before, making promises to mislead investors, and advocating for regulations that don't address any real harms, but will help them monopolize a market.