this post was submitted on 08 Jul 2025
105 points (97.3% liked)

Selfhosted

Your ML model cache volume is getting blown up during restart and the model is being re-downloaded during the first search post-restart. Either set it to a path somewhere on your storage, or ensure you're not blowing up the dynamic volume upon restart.

In my case I changed this:

  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

To this:

  immich-machine-learning:
    ...
    volumes:
      - ./cache:/cache

I no longer have to wait uncomfortably long when I'm trying to show off Smart Search to a friend, or just need a meme pronto.

That'll be all.

top 42 comments
[–] i_am_not_a_robot@discuss.tchncs.de 25 points 3 days ago (1 children)

It's not normal for - model-cache:/cache to be deleted on restart or even upgrade. You shouldn't need to do this.

[–] avidamoeba@lemmy.ca 7 points 3 days ago* (last edited 3 days ago)

Yes, it depends on how you're managing the service. If you're using one of the common patterns via systemd, you may be cleaning up everything, including old volumes, like I do.

E: Also if you have any sort of lazy prune op running on a timer, it could blow it up at some point.
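
A lazy prune on a timer can do exactly that. As a hypothetical sketch (not my actual config), a pair of units like these would delete the model-cache volume whenever the timer fires while the stack happens to be down:

```ini
# docker-prune.service (illustrative)
[Unit]
Description=Prune unused Docker data

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -f
# on recent Docker, -a is needed for volume prune to include *named*
# volumes such as model-cache; anything without a running container goes
ExecStart=/usr/bin/docker volume prune -af

# docker-prune.timer (illustrative)
[Unit]
Description=Weekly Docker prune

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```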

[–] MangoPenguin@lemmy.blahaj.zone 13 points 3 days ago* (last edited 3 days ago) (1 children)

Doing a volume like the default Immich docker-compose uses should work fine, even through restarts. I'm not sure why your setup is blowing up the volume.

Normally volumes are only removed if there is no running container associated with them and you manually run docker volume prune.

[–] avidamoeba@lemmy.ca 3 points 3 days ago (3 children)

Because on restart I clean up everything that's not explicitly on disk:

[Unit]
Description=Immich in Docker
After=docker.service 
Requires=docker.service

[Service]
TimeoutStartSec=0

WorkingDirectory=/opt/immich-docker

ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
ExecStartPre=-/usr/bin/docker compose down --remove-orphans
ExecStartPre=-/usr/bin/docker compose rm -f -s -v
ExecStartPre=-/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up

Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
[–] waitmarks@lemmy.world 10 points 3 days ago (1 children)

But why?

Why not just down/up normally and have a cleanup job on a schedule to get rid of any orphans?
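
For example (sketch, schedule arbitrary), a plain cron entry covers the orphan cleanup without rebuilding state on every restart:

```
# Weekly: remove stopped containers, dangling images and unused networks,
# but leave named volumes (like model-cache) alone
0 4 * * 0  /usr/bin/docker system prune -f
```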

[–] corsicanguppy@lemmy.ca 5 points 3 days ago

But why?

In a world where we can't really be sure what's in an upgrade, a super-clean start that burns any ephemeral data is about the best way to ensure a consistent start.

And consistency gives reliability, as much as we can get without validation (validation is "compare to what's correct", but consistency is "try to repeat whatever it was").

[–] PieMePlenty@lemmy.world 6 points 3 days ago (1 children)

Wow, you pull new images every time you boot up? Coming from a mindset of having rock-solid stability, this scares me. You're living your life on the edge, my friend. I wish I could do that.

[–] avidamoeba@lemmy.ca 3 points 3 days ago* (last edited 3 days ago) (1 children)

I use a fixed tag. 😂 It's more of a simple way to update. Change the tag in SaltStack, apply config, service is restarted, new tag is pulled. If the tag doesn't change, the pull is a no-op.
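
A sketch of what the pinned tag looks like in compose (version number illustrative, rendered by SaltStack):

```yaml
services:
  immich-server:
    # bumping this tag and restarting the unit is the whole upgrade;
    # while it stays the same, the pull before startup is a no-op
    image: ghcr.io/immich-app/immich-server:v1.135.0
```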

[–] PieMePlenty@lemmy.world 2 points 3 days ago

Ahh, calmed me down. Never thought of doing anything like you're doing it here, but I do like it.

[–] MangoPenguin@lemmy.blahaj.zone 1 points 2 days ago* (last edited 2 days ago) (1 children)

That's wild! What advantage do you get from it, or is it just because you can for fun?

Also, I've never seen a service created for each docker stack like that before.

[–] avidamoeba@lemmy.ca 1 points 2 days ago (1 children)

Well, you gotta start it somehow. You could rely on compose's built-in service management, which will restart containers upon system reboot if they were started with -d and have the right restart policy. But you still have to start those at least once. How do you do that? Unless you plan to start it manually, you have to use some service startup mechanism. That leads us to a systemd unit. I have to write a systemd unit to do docker compose up -d.

But then I'm splitting the service lifecycle management across two systems. If I want to stop the service, I can no longer do that via systemd. I have to go find where the compose file is and issue docker compose down. Not great. Instead I'd write a stop line in my systemd unit so I can start/stop from a single place. But wait 🫷 that's kinda what I'm doing, isn't it? Except if I start it with docker compose up without -d, I don't need a separate stop line and systemd can directly monitor the process. As a result I get logs in journald too, and I can use systemd's restart policies. Having the service managed by systemd also means I can use systemd dependencies such as fs mounts, network availability, you name it. It's way more powerful than compose's restart policy.

Finally, I like to clean up any data I haven't explicitly intended to persist across service restarts, so that I don't end up debugging an issue that manifests because of some persisted piece of data I'm completely unaware of.

Interesting, waiting on network mounts could be useful!

I deploy everything through Komodo so it's handling the initial start of the stack, updates, logs, etc..

[–] avidamoeba@lemmy.ca 16 points 3 days ago* (last edited 3 days ago) (2 children)

Oh, and if you haven't changed from the default ML model, please do. The results are phenomenal. The default is fine but only really warranted on very low-power hardware. If you have a notebook/desktop-class CPU and/or a GPU with 6GB+ of RAM, you should try a larger model. I used the best model they have and it consumes around 4GB of VRAM.

[–] apprehensively_human@lemmy.ca 8 points 3 days ago (1 children)

Which model would you recommend? I just switched from ViT-B/32 to ViT-SO400M-16-SigLIP2-384__webli since it seemed to be the most popular.

[–] avidamoeba@lemmy.ca 9 points 3 days ago (1 children)

I switched to the same model. It's absolutely spectacular. The only extra things I did were to increase the concurrent job count for Smart Search and to give the model access to my GPU, which sped up the initial scan by at least an order of magnitude.
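
For the GPU part, Immich ships a hardware-acceleration compose override; roughly like this for NVIDIA (check the current docs for the exact file and service names):

```yaml
  immich-machine-learning:
    # the -cuda image variant plus the hwaccel override expose the GPU
    # to the ML container (from Immich's hwaccel.ml.yml)
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
    extends:
      file: hwaccel.ml.yml
      service: cuda
```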

[–] apprehensively_human@lemmy.ca 3 points 2 days ago* (last edited 1 day ago)

Seems to work really well. I can do obscure searches like Outer Wilds and it will pull up pictures I took from my phone of random gameplay moments, so it's not doing any filename or metadata cheating there.

[–] Showroom7561@lemmy.ca 2 points 3 days ago (1 children)

Is this something that would be recommended if self-hosting off a Synology 920+ NAS?

My NAS does have extra ram to spare because I upgraded it, and has NVME cache 🤗

[–] avidamoeba@lemmy.ca 2 points 3 days ago* (last edited 3 days ago) (1 children)

That's a Celeron, right? I'd try a better AI model. Check this page for the list. You could try the heaviest one. It'll take a long time to process your library, but inference afterwards is faster. I don't know how much faster it is. Maybe it would be fast enough to be usable. If not, choose a lighter model. There are execution times in the table that I assume tell us how heavy the models are. Once you change the model, you have to let it rescan the library.

[–] Showroom7561@lemmy.ca 3 points 3 days ago (2 children)

That’s a Celeron right?

Yup, the Intel J4125 Celeron 4-Core CPU, 2.0-2.7Ghz.

I switched to the ViT-SO400M-16-SigLIP2-384__webli model, same as what you use. I don't worry about processing time, but it looks like a more capable model, and I really only use immich for contextual search anyway, so that might be a nice upgrade.

[–] iturnedintoanewt@lemmy.world 1 points 3 days ago* (last edited 3 days ago) (1 children)

What's your consideration for choosing this one? I would have thought ViT-B-16-SigLIP2__webli to be slightly more accurate, with faster response and all that, while using slightly less RAM (1.4GB less, I think).

[–] Showroom7561@lemmy.ca 3 points 3 days ago

Seemed to be the most popular. LOL The smart search job hasn't been running for long, so I'll check that other one out and see how it compares. If it looks better, I can easily use that.

[–] avidamoeba@lemmy.ca 1 points 3 days ago (1 children)

Did you run the Smart Search job?

[–] Showroom7561@lemmy.ca 2 points 3 days ago (1 children)
[–] avidamoeba@lemmy.ca 1 points 3 days ago (1 children)

Let me know how inference goes. I might recommend that to a friend with a similar CPU.

[–] Showroom7561@lemmy.ca 2 points 3 days ago (1 children)

I decided on the ViT-B-16-SigLIP2__webli model, so switched to that last night. I also needed to update my server to the latest version of Immich, so a new smart search job was run late last night.

Out of 140,000+ photos/videos, it's down to 104,000 and I have it set to 6 concurrent tasks.

I don't mind it processing for 24h. I believe when I first set immich up, the smart search took many days. I'm still able to use the app and website to navigate and search without any delays.

[–] avidamoeba@lemmy.ca 1 points 2 days ago (2 children)

Let me know how the search performs once it's done. Speed of search, subjective quality, etc.

[–] Showroom7561@lemmy.ca 2 points 1 day ago (1 children)

OK, indexing finished some time yesterday and I ran a few searches like:

"Child wearing glasses indoors"

"Cars with no wheels"

"Woman riding a bike"

Results come up (immich on android) in three seconds.

But the quality of the results does appear to be considerably better with ViT-B-16-SigLIP2__webli compared to the default model.

I'm pretty happy. 👍

[–] avidamoeba@lemmy.ca 1 points 1 day ago* (last edited 1 day ago) (1 children)

Nice. So this model is perfectly usable by lower end x86 machines.

I discovered that the Android app shows results a bit slower than the web. The request doesn't reach Immich during the majority of the wait. I'm not sure why. When searching from the web app, the request is received by Immich immediately.

[–] Showroom7561@lemmy.ca 2 points 23 hours ago

Interesting, it's slightly slower for me through the web interface both with a direct connect to my network, or when proxied through the internet. Still, we're talking seconds here, and the results are so accurate!

Immich has effectively replaced the (expensive) Windows software Excire Foto, which I was using for on-device contextual search because Synology Photos search just sucks. Excire isn't ideal to run from Linux because it has to be done through a VM, so I'm happy to self-host Immich and be able to use it even while out of the house.

[–] Showroom7561@lemmy.ca 1 points 2 days ago

Search speed was never an issue before, and neither was quality. My biggest gripe is not being able to sort search by date! If I had that, it would be perfect.

But I'll update you once it's done (at 97,000 to go... )

[–] wabasso@lemmy.ca 3 points 3 days ago (2 children)

Ok, I should know this by now, but what actually is the current "./" directory when you use that? Is it the docker daemon's start dir, like /var/docker?

[–] mhzawadi@lemmy.horwood.cloud 7 points 3 days ago (2 children)

./ will be the directory you run your compose from

[–] elmicha@feddit.org 1 points 3 days ago

I'm almost sure that ./ is the directory of the compose.yaml.

Normally I just run docker compose up -d in the project directory, but I could run docker compose -f /somewhere/compose.yaml up -d from another directory, and then the ./ would be /somewhere, and not the directory where I started the command.
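
A docker-free way to picture that resolution (toy sketch, paths illustrative):

```shell
# Compose v2 resolves relative bind-mount paths against the project
# directory, which defaults to the directory containing the compose file
compose_file=/opt/immich-docker/docker-compose.yml
compose_dir=$(dirname "$compose_file")
# so "./cache" in that file refers to:
echo "${compose_dir}/cache"   # → /opt/immich-docker/cache
```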

[–] SheeEttin@lemmy.zip 0 points 3 days ago (3 children)

That seems like a bad idea

[–] MangoPenguin@piefed.social 8 points 3 days ago (1 children)

It's convenient because your data is stored in the same folder as your docker-compose.yaml file, making backups or migrations simpler.

[–] avidamoeba@lemmy.ca 2 points 3 days ago

Yup. Everything is in one place and there's no hardcoded paths outside of the work dir making it trivial to move across storage or even machines.
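
As a toy sketch of why that helps (all paths made up), a full backup or migration of the stack becomes a single archive of one directory:

```shell
# compose file and bind-mounted data live under one directory,
# so one tar captures the whole stack
set -e
proj=$(mktemp -d)                          # stand-in for /opt/immich-docker
mkdir -p "$proj/cache"
printf 'services: {}\n' > "$proj/docker-compose.yml"
tar czf "$proj.tar.gz" -C "$proj" .
tar tzf "$proj.tar.gz" | grep -q 'docker-compose.yml' && echo "backup ok"
```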

[–] ohshit604@sh.itjust.works 4 points 3 days ago* (last edited 3 days ago) (1 children)

As others stated, it's not a bad way of managing volumes. In my scenario I store all volumes in a /config folder.

For example on my SearXNG instance I have a volume like such:

services:
  searxng:
    …
    volumes:
      - ./config/searx:/etc/searxng:rw

This makes the files for SearXNG two folders away. I also store these in the /home/YourUser directory so docker avoids needing sudo access whenever possible.

[–] SheeEttin@lemmy.zip 1 points 3 days ago (1 children)

So why would you not write out the full path? I frequently rerun compose commands from various places, if I'm troubleshooting an issue.

[–] ohshit604@sh.itjust.works 3 points 3 days ago* (last edited 3 days ago) (1 children)

So why would you not write out the full path?

The other day my raspberry pi decided it didn’t want to boot up, I guess it didn’t like being hosted on an SD card anymore, so I backed up my compose folder and reinstalled Rasp Pi OS under a different username than my last install.

If I specified the full path on every container it would be annoying to have to redo them if I decided I want to move to another directory/drive or change my username.

[–] SheeEttin@lemmy.zip 1 points 3 days ago

I'd just do it with a simple search and replace. Have done. I feel like relative paths leave too much room for human error.

[–] napkin2020@sh.itjust.works 0 points 3 days ago
[–] ShortN0te@lemmy.ml 2 points 3 days ago

It usually is the directory where you execute the docker compose command.