this post was submitted on 21 Feb 2026
17 points (100.0% liked)

Selfhosted

Any experiences with a self-hosted assistant like the modern Google Assistant? I'm looking for something LLM-powered that's smarter than the older assistants, which would just try to call third-party tools directly and miss or misunderstand requests half the time.

I'd like integration with a mobile app so I can use it from the phone and while driving. I see Home Assistant has an Android Auto integration. Has anyone used this, or another similar option? Any glaring limitations?

top 11 comments
[–] irotsoma@piefed.blahaj.zone 1 points 14 minutes ago

You have to run an LLM of your own and link it if you want quality even close to Google's, but Home Assistant with the Nabu Casa "Home Assistant Voice Preview Edition" speakers is working well enough for me. I don't use it for much beyond controlling my home automation components, though. It's still very early tech, and it doesn't understand all that much unless you add a lot of your own configuration. I eventually plan to add an LLM, but even just running on the Home Assistant Yellow hardware with a Raspberry Pi Compute Module 5, it works OK for the basics, though there is a slight delay.

I haven't tried it, but Nabu Casa also offers a subscription service for the voice processing if you want something more robust and can't host your own LLM. That means sending your data out, though, even if they have good privacy policies, and I'm not interested in that: while I somewhat trust Nabu Casa's current business model and policies, being hosted in the US means it's susceptible to the current regime's police-state policies. Personally, I'm waiting for hardware costs to recover from the AI bubble before I self-host an LLM.

[–] wildbus8979@sh.itjust.works 8 points 1 hour ago (1 children)

Home Assistant can absolutely do that. If you're OK with simple intent-based phrasing, it'll do it out of the box. If you want complex understanding and reasoning, you'll have to run a local LLM, like Llama, on top of it.
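
For a sense of what the out-of-the-box intent handling looks like, here's a minimal sketch that sends a phrase to Home Assistant's built-in conversation agent over its REST API (the URL and token below are placeholders, not anything from this thread):

```python
# Minimal sketch: send a phrase to Home Assistant's built-in conversation
# agent via its REST API. HA_URL and TOKEN are placeholders; TOKEN must be
# a long-lived access token created in your HA user profile.
import requests

HA_URL = "http://homeassistant.local:8123"  # placeholder address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # placeholder token

resp = requests.post(
    f"{HA_URL}/api/conversation/process",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"text": "turn on the kitchen lights", "language": "en"},
    timeout=10,
)
resp.raise_for_status()
# The agent's spoken reply lives under response.speech.plain.speech.
print(resp.json()["response"]["speech"]["plain"]["speech"])
```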

[–] eager_eagle@lemmy.world 1 points 1 hour ago (1 children)

Yeah, that's what I'm looking for. Do you know of a way to integrate Ollama with HA?

[–] lyralycan@sh.itjust.works 1 points 1 hour ago* (last edited 28 minutes ago)

I don't think there's a straightforward way like a HACS integration yet, but you can access Ollama from the web with open-webui and save the page to your home screen.
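
If you go that route, here's a minimal sketch for talking to Ollama's HTTP API directly, which is handy for sanity-checking the server before you point open-webui (or anything else) at it. The port is Ollama's default; the model tag is just an example and must already be pulled:

```python
# Minimal sketch: query a local Ollama server over its HTTP API.
# Assumes Ollama is listening on its default port 11434 and that the
# "gemma3:4b" model has been pulled already (both are assumptions).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:4b",
        "prompt": "In one sentence, what is Home Assistant?",
        "stream": False,  # return one JSON object instead of a chunk stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```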

Just be warned, you'll need a lot of resources depending on which model you choose and its parameter count (4B, 7B, etc.). Gemma3 4B uses around 3 GB of storage, 0.5 GB of RAM, and 4 GB of VRAM to respond. It's a compromise since I can't get replacement RAM, and it tends to be wildly inaccurate with large responses. The one I'd rather use, Dolphin-Mixtral 22B, takes 80 GB of storage and a minimum of 17 GB of RAM, the latter of which I can't afford to take from my other services.
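
As a rough rule of thumb (my own approximation, not vendor numbers), the weights of a quantized model need about parameter count × bits per weight ÷ 8 in memory, plus runtime overhead for the KV cache and buffers:

```python
# Back-of-the-envelope sizing: weights ≈ params * bits_per_weight / 8,
# inflated by an overhead factor for KV cache and runtime buffers.
# The 4-bit default and 1.2x overhead are assumptions, not measurements.
def approx_model_gb(params_billions: float,
                    bits_per_weight: int = 4,
                    overhead: float = 1.2) -> float:
    return params_billions * bits_per_weight / 8 * overhead

print(f"{approx_model_gb(4):.1f} GB")   # ~2.4 GB for a 4B model at 4-bit
print(f"{approx_model_gb(22):.1f} GB")  # ~13.2 GB for a 22B model at 4-bit
```

Those estimates land in the same ballpark as the numbers above, which is about all a rule of thumb is good for.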

Home Assistant can do that, but the quality will really depend on what hardware you have to run the LLM. If you only have a CPU, you'll be waiting 20 seconds for a response, and the response could also be pretty poor if you have to run a small quantized model.

[–] Kirk@startrek.website 1 points 1 hour ago

Maybe things have improved, but the last time I tried the Home Assistant, er, assistant, it was garbage at anything other than the most basic commands given perfectly.