this post was submitted on 08 Jan 2026
1074 points (99.2% liked)
Technology
you are viewing a single comment's thread
view the rest of the comments
It is going to take at least five years before local AI is user-friendly enough, and performant hardware is common enough, that ordinary folks would consider buying an AI machine.
I have a top-end DDR4 gaming rig. It takes a long time for a 100B-class model to produce a roleplaying response, at least forty minutes with my settings via KoboldCPP and a GGUF. I don't think a typical person would want to wait more than two minutes for a good response, so we will need at least DDR6-era devices before it is practical for everyday people.
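To put rough numbers on that (a back-of-envelope sketch, not a benchmark): CPU inference on a big model is mostly memory-bandwidth-bound, so generation speed is roughly RAM bandwidth divided by how many bytes of weights have to be streamed per token. The bandwidth and model-size figures below are illustrative assumptions, not measurements.

```python
# Rough estimate of CPU token generation speed for a large local model.
# Assumption: generation is memory-bandwidth-bound, so each token requires
# streaming roughly the full set of quantized weights from RAM once.

def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound tokens/sec if every token reads the full weights once."""
    return bandwidth_gb_s / model_size_gb

# Illustrative numbers (assumptions, not measurements):
# ~60 GB of weights for a ~100B-parameter model quantized to ~4-5 bits,
# ~50 GB/s for dual-channel DDR4, ~130 GB/s for a hypothetical DDR6 setup.
ddr4 = tokens_per_second(50, 60)    # ~0.8 tokens/sec
ddr6 = tokens_per_second(130, 60)   # ~2.2 tokens/sec

print(f"DDR4: ~{ddr4:.1f} tok/s, DDR6: ~{ddr6:.1f} tok/s")
# Even the faster case is slow for a long roleplay reply, which is why
# responses stretch into many minutes on commodity RAM.
```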
A local LLM is still an LLM... I don't think it's gonna be terribly useful no matter how good your hardware is
Local AI can be useful. But I would rather see nice implementations that use small but brilliantly tuned models for, let's say, better predictive text. It's already somewhat AI-based, I just would like it to be better.
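As a minimal sketch of the kind of thing that comment imagines: a small language model can already serve local next-word suggestions. Here "distilgpt2" and the sampling parameters are just placeholders for the small, well-tuned model described above, not a recommendation.

```python
# Minimal sketch: local next-word suggestions from a small language model.
# "distilgpt2" is a readily available small model used as a stand-in for the
# "small but brilliantly tuned" model the comment imagines.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

def suggest_next_words(prefix: str, n: int = 3) -> list[str]:
    """Return a few candidate continuations for the typed prefix."""
    outputs = generator(
        prefix,
        max_new_tokens=2,          # only predict a word or two ahead
        num_return_sequences=n,    # offer several suggestions
        do_sample=True,            # vary the candidates
        pad_token_id=50256,        # GPT-2's EOS token id, silences a warning
    )
    return [o["generated_text"][len(prefix):].strip() for o in outputs]

print(suggest_next_words("I will see you at the"))
```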