this post was submitted on 27 Apr 2026
1354 points (99.1% liked)

Programmer Humor

[–] subnormal@lemmy.dbzer0.com 2 points 5 hours ago (1 children)
[–] ivn@tarte.nuage-libre.fr 2 points 5 hours ago (1 children)

Yes, but not for targeting, as explained in the article I linked.

The Maven Smart System is the platform that came out of those exercises, and it, not Claude, is what is being used to produce “target packages” in Iran.

[–] subnormal@lemmy.dbzer0.com 0 points 5 hours ago (1 children)

Anthropic's AI did data analysis for Project Maven, a system that used data analyzed from various sources to target a school. So the AI is part of the "kill chain", no?

[–] ivn@tarte.nuage-libre.fr 2 points 4 hours ago (1 children)

I suggest you read the article.

The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLM detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.

[–] subnormal@lemmy.dbzer0.com 0 points 4 hours ago (1 children)

Yes. I never said it was an LLM. It was probably some custom AI system made by Anthropic.

Are we agreed that some Anthropic AI system (not necessarily the Claude LLM) was in the kill chain? That was what I was trying to say from the beginning.

[–] ivn@tarte.nuage-libre.fr 2 points 4 hours ago (1 children)

Well, you'll need to source your claim. The wiki article you linked only mentions Claude.

The Anthropic contract is also quite recent compared to Maven's creation.

[–] subnormal@lemmy.dbzer0.com 0 points 4 hours ago* (last edited 4 hours ago) (1 children)

My sources are already linked in my two earlier comments. What about them are you disputing?

I don't see how the recency matters. That Anthropic was not involved in bombings conducted by the US military in previous years does not absolve them of their involvement in the bombing of the school in Minab.

[–] ivn@tarte.nuage-libre.fr 2 points 3 hours ago (1 children)

They only mention Claude, so where is the source that "some custom AI system made by Anthropic", not an LLM, "was in the kill chain"?

I mean, I get that you want to tie Anthropic to this, I don't like them either, but we should stay factual and avoid filling the gaps with some "probably". It's also counterproductive, as Maven and Palantir are huge menaces and this shifts the blame away from them.

[–] subnormal@lemmy.dbzer0.com 0 points 3 hours ago* (last edited 3 hours ago) (1 children)

You're the one saying it's not the Claude LLM doing the targeting. Your source is that Guardian article you linked.

I don't care if it's an LLM or some other thing made by Anthropic. Anthropic is involved in this. All the sources in this conversation so far indicate so. Or are you trying to argue that they are just supplying Palantir and Project Maven for wholly innocent purposes?

Pointing out Anthropic's involvement in the killing of 120 students does not in any way shift blame away from Palantir and Maven. Of course there are information gaps regarding how exactly the AI was involved. No remotely competent military would make all this information public.

[–] ivn@tarte.nuage-libre.fr 2 points 3 hours ago (1 children)

I'm just saying that, as far as we know, the Anthropic contract is about Claude, and the targeting is not done by an LLM.

[–] subnormal@lemmy.dbzer0.com 1 points 1 hour ago (1 children)

Okay fair enough.

Since Maven's entire business is data analysis and targeting, can we agree that if the AI is not being used for targeting, it is being used to analyze data? And that analyzed data gets fed into the targeting system, so the AI is part of the kill chain?

What kind of data is being analyzed by AI? How much of it feeds into the targeting system? I concede that I don't know and have no source. The US military would have to be really stupid to make this information public.

[–] ivn@tarte.nuage-libre.fr 1 points 1 hour ago

There is nothing that indicates Anthropic's AI is used to analyze data; I'm not saying it isn't, just that we don't know. I'm going to quote a smaller section of a quote I gave earlier from the same Guardian article:

In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English.

But the term "AI" is an issue here: there are multiple AIs, of different kinds, made by different companies. There is AI used for targeting, no doubt, but it's not Claude; it's Maven and some other subcomponents. The fact that Anthropic joined the project late, after it was already operational, is a good hint that they do not provide a core feature, but that's only speculation.