this post was submitted on 14 Jul 2025

Technology

[–] iAvicenna@lemmy.world 3 points 12 hours ago* (last edited 12 hours ago) (1 children)

On a $200M laptop? You can run Llama 4, which is already an upper-scale model, on a 64 GB RAM machine, albeit slowly. TBF, I didn't do the math on how much that would add up to along with salaries, server costs, etc., mainly because this is the Pentagon we are talking about, so it should already have access to some pretty decent computational capacity. So yeah, $200M feels like too much when you already have most of the resources needed (compute, plus open LLM models for specific tasks).

The really huge upside is that you don't have to share confidential information with a company whose CEO is a lunatic who will likely have no qualms about sharing that data with other agents when money and power are involved. Hell, you shouldn't share any confidential/sensitive information with any of the large tech companies, to be honest. They have become what they are not by sticking to ethical principles, and they are likely to grossly overcharge (which defeats the purpose of outsourcing and makes investing in permanent infrastructure the more reasonable option). They will surely use it as some sort of leverage, 100% guaranteed.

[–] ExLisper@lemmy.curiana.net 2 points 11 hours ago (1 children)

I agree that $200M is way too much to spend on LLMs, but talking about downloading open-source models is completely missing the point. They are not paying for some sort of Grok license so that they can access this amazing model. They are paying for the computational capacity needed to run this model and provide access to thousands of people over some period of time. The alternative here is to simply buy everyone a subscription to OpenAI or something.

[–] iAvicenna@lemmy.world 1 points 10 hours ago (1 children)

With open source you have the advantage of being able to use different LLMs for different tasks, which can be more efficient. Surely the Pentagon has access to enough compute power to set this up for a thousand people? The rest is UI, IT, and fine-tuning by a couple of data scientists/programmers trained in LLMs. Surely it is better than Elon, who changes his mind on politics every five days and thinks that twenty-year-olds can run critical government infrastructure because they worship him. Not an expert, just don't like big tech companies, particularly Melon.
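The "different LLMs for different tasks" idea boils down to a small routing layer in front of a few locally hosted models. A minimal sketch in Python, where the model names and task categories are purely illustrative assumptions (not recommendations, and not anything the Pentagon actually runs):

```python
# Illustrative task-to-model routing table for locally hosted open models.
# Model names and task categories are hypothetical examples only.
TASK_MODELS = {
    "summarize": "llama4:scout",   # larger general-purpose model
    "code": "codellama:34b",       # code-specialized model
    "classify": "mistral:7b",      # small, fast model for cheap tasks
}

DEFAULT_MODEL = "llama4:scout"

def pick_model(task: str) -> str:
    """Return the model assigned to a task, falling back to the default."""
    return TASK_MODELS.get(task, DEFAULT_MODEL)
```

The point is that the routing logic itself is trivial; the real work (and cost) is in hosting the models and wiring this into a UI, which is the infrastructure argument above.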

[–] ExLisper@lemmy.curiana.net 1 points 10 hours ago (1 children)

compute power to set this up for a thousand people [...], UI, IT and fine tuning by a couple data scientists/programmers trained in LLMs.

Yes, that's what's needed. It's not just about downloading an open-source LLM. That was my point. I see we agree now.

[–] iAvicenna@lemmy.world 1 points 9 hours ago

Alright let's go then!