this post was submitted on 12 Feb 2026
1183 points (98.4% liked)

Technology

[–] unspeakablehorror@thelemmy.club 54 points 4 days ago (37 children)

Off with their heads! GO self-hosted, go local... toss the rest in the trash can before this crap gets a foothold and fully enshittifies.

[–] ch00f@lemmy.world 5 points 4 days ago (10 children)

GO self-hosted,

So yours and another comment I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server's Intel Arc A750 GPU with 8 GB of VRAM. Not exactly top-of-the-line, but not bad.

I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn't even thought to try, and it worked.

But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.

8B is about the largest parameter count I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?
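
If anyone else wants to poke at a local model from a script, here's a rough sketch. It assumes the container exposes an Ollama-style API on the default port 11434 (that's a guess at a typical setup, not necessarily exactly mine; swap the URL and model name for whatever your container actually serves):

```python
import json
import urllib.request

# Assumes an Ollama-style API on the default port; the model name should
# match whatever the container reports (e.g. `ollama list`).
URL = "http://localhost:11434/api/generate"

payload = {
    "model": "deepseek-r1:8b",
    "prompt": "Write a short example Rust function that reverses a string.",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])
```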

[–] SirHaxalot@nord.pub 0 points 4 days ago

Honestly, you pretty much don't. LLMs are insanely expensive to run, since most of the model improvements come from simply growing the model. It's not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you're still going to be behind the purpose-built GPUs with 80 GB of VRAM.

Maybe it could work for some use cases, but I'd rather just not use AI.
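
To put rough numbers on it, here's a back-of-the-envelope sketch (weights only, assuming 4-bit quantization; KV cache and runtime overhead come on top, so real requirements are higher):

```python
# Approximate VRAM needed just to hold the weights at a given quantization
# width. Ignores KV cache, activations, and runtime overhead.
def weight_vram_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

for params in (8, 70, 671):
    print(f"{params:>4}B @ 4-bit ~ {weight_vram_gb(params, 4):6.1f} GB")

# Prints roughly:
#    8B @ 4-bit ~    4.0 GB  -> fits on an 8 GB card like the A750
#   70B @ 4-bit ~   35.0 GB  -> already past any single consumer GPU
#  671B @ 4-bit ~  335.5 GB  -> the full DeepSeek-R1; datacenter territory
```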
