[–] rapide@piefed.zip 66 points 1 day ago (1 children)
[–] Vegan_Joe@piefed.world 16 points 1 day ago (20 children)

Dumb question, but...is Claude worse than GPT or Gemini?

I was under the impression that it was the lesser of evils

[–] ptu@sopuli.xyz 6 points 16 hours ago

I just started with Claude and I can't yet tell when it has actually done something it says it has done. With ChatGPT I can see through the bullshit quite well by now. At first I was happy because I thought Claude was rid of that bullshit, but it turns out it's just a different type of bullshit.

The UI and file handling are better in Claude, though, and supposedly you can make it create skills, which are like instruction booklets on how to do certain tasks that you can then export and share. But the ones I created were lost over the weekend, so I'm not sure how robust they actually are.

[–] Epp@lemmus.org 51 points 1 day ago (1 children)

They are the lesser of the available evils. Anthropic, the proprietors of Claude, were blacklisted by the US administration for refusing to greenlight their technology being used for fascism.

[–] subnormal@lemmy.dbzer0.com 36 points 23 hours ago* (last edited 23 hours ago) (2 children)

Anthropic's AI system was used to target the school in Minab, killing 120 students. https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/

The company is suing to be able to supply the US military again.

[–] ivn@tarte.nuage-libre.fr 6 points 14 hours ago (1 children)
[–] subnormal@lemmy.dbzer0.com 2 points 11 hours ago (1 children)
[–] ivn@tarte.nuage-libre.fr 3 points 10 hours ago (1 children)

Yes, but not for targeting, as explained in the article I linked.

The Maven Smart System is the platform that came out of those exercises, and it, not Claude, is what is being used to produce “target packages” in Iran.

[–] subnormal@lemmy.dbzer0.com 0 points 10 hours ago (1 children)

Anthropic's AI did data analysis for Project Maven, a system that used data analyzed from various sources to target a school. So the AI is part of the "kill chain", no?

[–] ivn@tarte.nuage-libre.fr 2 points 10 hours ago (1 children)

I suggest you read the article.

The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLMs detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.

[–] subnormal@lemmy.dbzer0.com 0 points 10 hours ago (1 children)

Yes. I never said it was an LLM. It was probably some custom AI system made by Anthropic.

Are we agreed that some Anthropic AI system (not necessarily the Claude LLM) was in the kill chain? That was what I was trying to say from the beginning.

[–] ivn@tarte.nuage-libre.fr 2 points 10 hours ago (1 children)

Well, you'll need to source your claim. The wiki article you linked only mentions Claude.

The Anthropic contract is also quite recent compared to Maven's creation.

[–] subnormal@lemmy.dbzer0.com 0 points 9 hours ago* (last edited 9 hours ago) (1 children)

My sources are already linked in my two earlier comments. What about them are you disputing?

I don't see how the recency matters. That Anthropic was not involved in bombings conducted by the US military in previous years does not absolve them of their involvement in the bombing of the school in Minab.

[–] ivn@tarte.nuage-libre.fr 2 points 9 hours ago (1 children)

They only mention Claude. Where is the source that "some custom AI system made by Anthropic", not an LLM, "was in the kill chain"?

I mean, I get that you want to tie Anthropic to this. I don't like them either, but we should stay factual and avoid filling the gaps with some "probably". It's also counterproductive, as Maven and Palantir are huge menaces and this shifts the blame away from them.

[–] subnormal@lemmy.dbzer0.com 0 points 9 hours ago* (last edited 9 hours ago) (1 children)

You're the one saying it's not the Claude LLM doing the targeting. Your source is that Guardian article you linked.

I don't care if it's an LLM or some other thing made by Anthropic. Anthropic is involved in this. All the sources in this conversation so far indicate so. Or are you trying to argue that they are just supplying Palantir and Project Maven for wholly innocent purposes?

Pointing out Anthropic's involvement in the killing of 120 students does not in any way shift blame away from Palantir and Maven. Of course there are information gaps regarding how exactly the AI was involved. No remotely competent military would make all this information public.

[–] ivn@tarte.nuage-libre.fr 2 points 9 hours ago (1 children)

I'm just saying that, as far as we know, the Anthropic contract is about Claude, and the targeting is not done by an LLM.

[–] subnormal@lemmy.dbzer0.com 1 points 7 hours ago (1 children)

Okay fair enough.

Since Maven's entire business is data analysis and targeting, can we agree that if the AI is not being used for targeting, it is being used to analyze data? And the analyzed data gets fed into the targeting system, so the AI is part of the kill chain?

What kind of data is being analyzed by AI? How much of it feeds into the targeting system? I concede that I don't know and have no source. The US military would have to be really stupid to make this info public.

[–] ivn@tarte.nuage-libre.fr 1 points 7 hours ago (1 children)

There is nothing that indicates Anthropic's AI is used to analyze data. I'm not saying it's not, just that we don't know. I'm going to quote a smaller section of the Guardian article I quoted earlier:

In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English.

But the term "AI" is an issue here: there are multiple AIs, of different kinds, made by different companies. There is AI used for targeting, no doubt, but it's not Claude; it's Maven and some other subcomponents. The fact that Anthropic joined the project late, after it was already operational, is a good hint that they do not provide a core feature, but that's only speculation.

[–] subnormal@lemmy.dbzer0.com 1 points 4 hours ago (1 children)

Okay. I guess we at least agree on the facts.

You are giving the company a huge benefit of the doubt and I don't understand why. May I ask: if it were Elon Musk's xAI/Grok rather than Anthropic, would your thoughts on this change? How about if it were Yandex making the AI and the school were in Ukraine?

[–] ivn@tarte.nuage-libre.fr 1 points 4 hours ago (1 children)

It wouldn't change anything, and I'm confused as to why you think it would, and why you think I'm giving the company "a huge benefit of the doubt".

I'm just pointing at what we know, what we don't know and what you are just making up.

[–] subnormal@lemmy.dbzer0.com 1 points 3 hours ago (1 children)

Facts:

  • Anthropic supplies some AI system to Maven
  • Maven analyzes data and determines bombing targets

My conclusion: Anthropic's AI is in the US military's kill chain which killed 120 children.

Your conclusion: The LLM did not directly target the school. We don't know how it was used. It was also not there from the beginning, so it was probably not part of the "core system."

[–] ivn@tarte.nuage-libre.fr 1 points 3 hours ago

That's not my conclusion; that's mostly just coming from the Guardian article. I say mostly because you're missing one part: we know how the LLM is used.

That's why I'm asking you to source your "conclusion".

[–] Epp@lemmus.org 8 points 19 hours ago (1 children)

That's one way to spin it.

My take on it is that it was used inappropriately, and when the fascists wanted it tailored for that abhorrent use, Anthropic refused; in retaliation, the fascists banned it for ANY use. So now Anthropic is suing to allow the sane to continue using it for its appropriate uses.

[–] subnormal@lemmy.dbzer0.com 2 points 18 hours ago (1 children)

What sane use? And how does this company plan to prevent the fascists from using it to kill another 120 children?

The only not-evil move is to not sell dual-use goods to fascists in the first place.

[–] Epp@lemmus.org 7 points 17 hours ago (1 children)

You seriously can't think of any sane use? How about categorizing large amounts of data? Brainstorming problem-solving strategies? Converting pseudocode to actual code? Troubleshooting error messages? I mean, there are dozens upon dozens of valid uses that harm no one.

How does Bic plan to prevent murderers from stabbing people with their pens? How does Toyota plan to stop drivers from committing vehicular manslaughter? How does Hewlett-Packard plan on preventing fascists from saving manifestos? How does Apple plan on preventing sexual criminals from taking pictures of their victims?

What's that? Companies don't need to accomplish impossible tasks to have a viable product? I guess it's only AI that has insurmountable demands placed on it by reactionaries.

The only not-evil move is to sit in a cave using sticks, once the trees figure out how to keep cavemen from beating their children with them.

[–] subnormal@lemmy.dbzer0.com -1 points 11 hours ago (1 children)

I wasn't clear. What I meant was: what sane things could a fascist military use AI for?

"Reactionary" lmao. My friend, I use LLMs all the time. Just not the proprietary ones from companies that are in bed with fascists.

[–] Epp@lemmus.org 1 points 4 hours ago (1 children)

Your problem is clearly with the fascists, as it should be, and AI is getting caught in the crossfire of your ire. You just can't see/admit it yet.

Unless you live in a cave, which you obviously don't since you're here on the Internet sharing your wisdom with us, you are participating in business and activities that enrich the fascists. It's just a fact of life when they own everything. There is no ethical consumption under capitalism.

[–] subnormal@lemmy.dbzer0.com 1 points 3 hours ago (1 children)

I have nothing against AI but everything against a certain AI company that is fully in bed with fascists.

There is no ethical consumption under capitalism.

Please do not use this slogan as an excuse not to seek out the least unethical option for your consumption.

[–] Epp@lemmus.org 1 points 55 minutes ago* (last edited 39 minutes ago) (1 children)

I have nothing against AI but everything against a certain AI company that is fully in bed with fascists.

Are you talking about Google? Apple? Meta? Twitter? Microsoft? OpenAI?

You can't be talking about the one company that was banned by the fascist government for not complying with their demands, because a company fully in bed with fascists would not be banned for refusing to comply. Yet, it seems in your confusion that is exactly what you're implying.

Please do not use this slogan as an excuse not to seek out the least unethical option for your consumption.

I don't, and that would be Anthropic's Claude. I don't know about you, but I don't have the hardware for a local LLM at the speed or proficiency they offer. Maybe you're so fortunate, and are judging the choices of the less fortunate for not passing your purity test?

[–] subnormal@lemmy.dbzer0.com 1 points 31 minutes ago

You must be American. I am talking about Kimi, Mistral, GLM, Liquid, Minimax, Arcee, Qwen, Deepseek, Xiaomi.

And you are of course allowed to use cloud inference if you don't have the hardware to run locally. Just choose an inference service that is not in bed with fascists. There are plenty. Good luck and have a nice day.

[–] ZoteTheMighty@lemmy.zip 5 points 18 hours ago

Claude is almost always the better model compared to GPT. I find that this is a good leaderboard. However, both Claude and GPT have similar business models: make sure everything they do is completely proprietary, and keep everything behind a monthly paywall. They both run massive data centers to train their models, and neither really deserves the term "Artificial Intelligence".

[–] subnormal@lemmy.dbzer0.com 12 points 23 hours ago (2 children)

There are many lesser evils. Use open-source/open-weight AI like Kimi, GLM, Deepseek, Mistral, Olmo, Arcee, Minimax, Qwen, Exaone, NVidia, Sarvam...

If you don't have the hardware to run locally, you can pay for API access. If you find the company problematic for whatever reason, you can switch to the same model served by a third party (possible because the model weights are publicly released).
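
For example, here's a minimal sketch of what that switch looks like with the `openai` Python client. The provider URLs, API keys, and model name are hypothetical placeholders; the only assumption is that each host exposes an OpenAI-compatible endpoint, which most open-weight hosting services do:

```python
# Minimal sketch: calling the same open-weight model through two different hosts.
# The base URLs, keys, and model name below are hypothetical placeholders.
from openai import OpenAI

PROVIDERS = {
    "provider_a": {"base_url": "https://api.provider-a.example/v1", "api_key": "KEY_A"},
    "provider_b": {"base_url": "https://api.provider-b.example/v1", "api_key": "KEY_B"},
}

def ask(provider: str, prompt: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    resp = client.chat.completions.create(
        model="deepseek-v3",  # same public weights, whoever happens to serve them
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Switching providers is just a different base_url; the model itself doesn't change.
print(ask("provider_a", "Explain the difference between open-weight and proprietary models."))
```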

[–] TachyonTele@piefed.social 5 points 22 hours ago (1 children)

Other than wanting a verbose answer to a question, what is it for?

[–] subnormal@lemmy.dbzer0.com 7 points 19 hours ago

For me, I just use it to get verbose answers to questions.

I use open-weight LLMs instead of search engines when I can, because Google/Bing/Yandex are complete proprietary black boxes run by corporations of questionable morality.

[–] Grail@multiverse.soulism.net 3 points 20 hours ago

Or you could just not use LLMs. Fuck AI.

[–] Dojan@pawb.social 10 points 1 day ago

In what manner? Capabilities, or belonging to an evil corporation that happily steals data and works to undermine democracy?

[–] IndustryStandard@lemmy.world 1 points 22 hours ago* (last edited 22 hours ago)

It is better than GPT and Gemini, but not great. Claude lost some US military contracts, at least to public knowledge.

https://www.cnbc.com/2026/03/04/pentagon-blacklist-anthropic-defense-tech-claude.html

Defense Secretary Pete Hegseth declared on X that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic.

The announcement came after Anthropic executives refused to comply with the government’s demands over its model use. They wanted assurances that their AI would not be tapped for fully autonomous weapons or mass domestic surveillance of America.

Anthropic’s models are still being used to support the U.S. military operations in Iran, even after the announcement from the Trump administration, as CNBC previously reported.

[–] rozodru@piefed.world -1 points 22 hours ago (1 children)

Lesser of the evils. That being said, as far as quality goes, Claude has taken a very noticeable decline within the past several months. It used to be half decent, but now 8 or 9 times out of 10 you're going to get a hallucination as a solution. Anthropic has REALLY dropped the ball with Claude and Claude Code. Absolute garbage LLM now.

[–] some_designer_dude@lemmy.world 5 points 19 hours ago

This could be user error, to some degree.
