this post was submitted on 08 Jan 2026
1074 points (99.2% liked)

[–] SabinStargem@lemmy.today 10 points 3 days ago (4 children)

It is going to take at least five years before local AI is user-friendly enough, and performant hardware widespread enough, that ordinary folks would consider buying an AI machine.

I have a top-end DDR4 gaming rig. It takes a long time for a 100B-parameter model to produce roleplaying output, at least forty minutes with my settings in KoboldCPP using a GGUF model. I don't think a typical person would want to wait more than two minutes for a good response, so we will need at least DDR6-era hardware before local AI is practical for everyday people.
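
For reference, this is roughly how a frontend talks to a local KoboldCPP instance; a minimal sketch assuming the default port (5001) and the KoboldAI-compatible /api/v1/generate endpoint, with the prompt and sampler settings as placeholders rather than my actual config:

```python
# Minimal sketch: send a prompt to a locally running KoboldCPP instance.
# Assumes the default port (5001) and the KoboldAI-compatible HTTP API;
# prompt and sampler values here are placeholders, not my real settings.
import requests

payload = {
    "prompt": "You are the narrator of a fantasy roleplay. The party enters the cave...",
    "max_context_length": 4096,   # how much prompt/history the model sees
    "max_length": 300,            # tokens to generate; longer = slower on CPU/RAM
    "temperature": 0.7,
}

resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=3600)
resp.raise_for_status()

# The API returns {"results": [{"text": "..."}]}
print(resp.json()["results"][0]["text"])
```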

[–] lmuel@sopuli.xyz 16 points 3 days ago (3 children)

A local LLM is still an LLM... I don't think it's gonna be terribly useful no matter how good your hardware is

[–] luridness@lemmy.ml 4 points 3 days ago

Local AI can be useful. But I would rather see nice implementations that use small but brilliantly tuned models for, let's say, better predictive text. Predictive text is already somewhat AI based; I just would like it to be better.
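
To illustrate the idea (not an actual keyboard implementation): a minimal sketch of small-model predictive text using Hugging Face transformers, with distilgpt2 standing in for whatever tiny, purpose-tuned model a keyboard would actually ship:

```python
# Minimal sketch of "smarter predictive text" with a small local model.
# distilgpt2 (~82M parameters) is just a stand-in; a real keyboard would
# use a much smaller model tuned specifically for on-device suggestions.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

def suggest(prefix, n=3):
    """Return a few short continuations of what the user has typed so far."""
    outputs = generator(
        prefix,
        max_new_tokens=4,         # only predict the next word or two
        num_return_sequences=n,
        do_sample=True,
        temperature=0.8,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    # Strip the prefix off each completion to keep just the suggested words.
    return [o["generated_text"][len(prefix):].strip() for o in outputs]

print(suggest("See you at the"))
```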
