this post was submitted on 11 Oct 2025
Selfhosted


I have two Dell T110 servers: the master has a 4TB WD Gold pool, and the slave has a 2.5TB pool of mixed WD Red drives. The slave is switched on once a week to receive automated replication tasks from the master. Only critical datasets are replicated, e.g. Immich with 20 years of photos. Both servers run TrueNAS SCALE ElectricEel-24.10.2.4. It occurred to me that ElectricEel-24.10.2.4 no longer uses the ix-applications folder to store installed Docker images. That means that although I'm replicating the Immich dataset, I'm not replicating the Docker images, so if the master server fails, I can't just turn on the slave. Is it possible to replicate the old ix-applications folder, which, by the way, is where?
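For context, the ZFS side of a manual replication looks roughly like this; the pool names, dataset name, and host alias below are placeholders for illustration, not the actual paths Electric Eel uses:

```shell
# Snapshot the apps dataset recursively, then send it to the standby box.
# "tank", "backup", "apps" and "standby" are assumptions, not real names.
zfs snapshot -r tank/apps@weekly-2025-10-11
zfs send -R tank/apps@weekly-2025-10-11 | ssh standby zfs recv -Fu backup/apps
```

TrueNAS replication tasks do essentially this under the hood, so once the right dataset is identified it can simply be added to the existing weekly task.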

top 3 comments
[–] scrubbles@poptalk.scrubbles.tech 2 points 4 days ago (1 children)

Replicating images isn't really best practice. Images are meant to be ephemeral on the server. Docker's pattern is to re-pull images when they're needed, and that only takes a few seconds. Saving the images would, IMO, just be a waste of space.

If you're afraid the images will be gone someday, the proper way to handle this is to run a Docker registry as a proxy. You stand up your own registry, like your.tld/registry, and set it to proxy (pull-through cache) mode, then configure Docker to pull your images through it. If an image is cached, it serves your local copy; otherwise it pulls through from the upstream registry and serves the image to your client. For backup, you then back up the registry's volume.

That fits the Docker pattern: your clients come up, query the local registry, and it serves your containers. Your server remains ephemeral.
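A minimal sketch of the proxy setup described above, using the stock registry:2 image; the port, the host storage path, and the mirror URL are assumptions you'd adapt to your setup:

```shell
# Run a registry in pull-through cache (proxy) mode, backed by a host path.
# "/mnt/tank/registry" is an assumed dataset path for illustration.
docker run -d --name registry-mirror -p 5000:5000 \
  -v /mnt/tank/registry:/var/lib/registry \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Tell the Docker daemon to try the mirror first, then restart it.
# Backing up /mnt/tank/registry then backs up every cached image.
echo '{ "registry-mirrors": ["http://localhost:5000"] }' \
  | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```

Note a pull-through cache only mirrors one upstream per registry instance, so images from registries other than Docker Hub would need their own mirror.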

[–] trilobite@lemmy.ml 1 points 4 days ago

OK, so maybe I didn't explain myself. What I meant is that I'd like resilience, so that if one server goes down I can quickly fire up the other. The only problem is that the slave server has a smaller pool, so I can't replicate the master's whole pool.

[–] ArchAengelus@lemmy.dbzer0.com 3 points 5 days ago

I’m not familiar with that particular tool.

If it uses Docker, the most common place for the images and overlays is /var/lib/docker. That directory contains ALL Docker overlays and such.
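If you want to confirm where your daemon actually keeps its data (TrueNAS may relocate it from the default), you can ask Docker directly:

```shell
# Print the daemon's data root; /var/lib/docker is the stock default.
docker info --format '{{ .DockerRootDir }}'
```

Whatever path that prints is the directory (or the dataset containing it) you'd need to replicate.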

Hope this helps.

https://stackoverflow.com/a/25978888 provides a more thorough answer.