this post was submitted on 12 Apr 2026
110 points (97.4% liked)

Selfhosted


Back in the day it was nice: apt-get update && apt-get upgrade and you were done.

But today every tool/service has its own way of being installed and updated:

  • docker:latest
  • docker:v1.2.3
  • custom script
  • git checkout v1.2.3
  • same but with custom migration commands afterwards
  • custom commands change from release to release
  • expects the update to be run as a specific user
  • update nginx config
  • update your own default config, where the service depends on the config changes
  • expects newer versions of other tools
  • etc.

I selfhost around 20 services like PieFed, Mastodon, PeerTube, Paperless-ngx, Immich, open-webui, Grafana, etc. And all of them have some dependencies which need to be updated too.

And nowadays you can't really keep running an older version, especially when it's internet-facing.

So anyway, what are your strategies for staying sane while keeping all your self-hosted services up to date?

top 50 comments
[–] mlfh@lm.mlfh.org 28 points 2 weeks ago (2 children)

Everything I run, I deploy and manage with ansible.

When I'm building out the role/playbook for a new service, I make sure to build in any special upgrade tasks it might have and tag them. When it's time to run infrastructure-wide updates, I can run my single upgrade playbook and pull in the upgrade tasks for everything everywhere - new packages, container images, git releases, and all the service restart steps to load them.

It's more work at the beginning to set the role/playbook up properly, but it makes maintaining everything so much nicer (which I think is vital to keep it all fun and manageable).
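As a sketch of what such a tagged upgrade task can look like (the role layout, service name, and tag are all hypothetical, not from the commenter's actual setup):

```yaml
# roles/immich/tasks/upgrade.yml -- hypothetical role layout
- name: Pull new container images for the stack
  community.docker.docker_compose_v2:
    project_src: /opt/immich
    pull: always
  tags: [upgrade]

- name: Restart the service unit to load the new images
  ansible.builtin.systemd:
    name: immich
    state: restarted
  tags: [upgrade]
```

With tasks tagged like this across all roles, a single `ansible-playbook site.yml --tags upgrade` pulls in the upgrade steps everywhere, as described above.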

[–] non_burglar@lemmy.world 10 points 2 weeks ago

+1 for ansible. There's a module for almost everything out there.

[–] jeena@piefed.jeena.net 7 points 2 weeks ago (1 children)

Yeah, for some reason I didn't think of ansible even though I use it at work regularly. Thanks for pointing it out!

[–] Cyber@feddit.uk 5 points 2 weeks ago

Just a word of caution...

I try to upgrade 1 (of a similar group) manually first to check it's not foobarred after the update, then crack on with the rest. Testing a restore is 1 thing, but restoring the whole system...?

[–] fhoekstra@feddit.nl 21 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Renovate + GitOps. Check out https://github.com/onedr0p/cluster-template

If you don't like Kubernetes, you can get a similar setup with doco-cd. The only limitation is that doco-cd can't update itself, but you can use SOPS and Renovate all the same for the other services.

[–] tofu@lemmy.nocturnal.garden 7 points 2 weeks ago

That or Komodo when using docker. Renovate is really good: you always know which version you're at, you can set it up to auto-merge on minor and/or patch level, it shows you the release notes, etc.

This tutorial is good: https://nickcunningh.am/blog/how-to-automate-version-updates-for-your-self-hosted-docker-containers-with-gitea-renovate-and-komodo
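For reference, the minor/patch auto-merge rule described above is a small piece of Renovate configuration; a minimal sketch of a renovate.json:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```

Major updates then still arrive as PRs with release notes for manual review.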

[–] BlackEco@lemmy.blackeco.com 2 points 2 weeks ago (1 children)

I guess auto merge isn't enabled, since there's no way to check if an update doesn't break your deployment beforehand, am I right?

[–] tofu@lemmy.nocturnal.garden 2 points 2 weeks ago (1 children)

You can configure automerge per stack and also if it's allowed on patch, minor or major upgrades.

[–] BlackEco@lemmy.blackeco.com 2 points 2 weeks ago (1 children)

Yes, but usually when you use automerge you should have set up a CI to make sure new versions don't break your software or deployment. How are you supposed to do that in a self-hosting environment?

[–] tofu@lemmy.nocturnal.garden 2 points 2 weeks ago

Ideally, you have at least two systems: test updates in the dev system and only then allow them in prod. So no auto-merge in prod in this case, or somehow have it check whether dev worked.

Seeing which services are usually fine to update without intervening and tuning your renovate config to it should be sufficient for homelab imho.

Given that most people are running :latest and just yolo the updates with Watchtower, or don't automate at all, some granular control with Renovate is already a big improvement.

[–] totoro@slrpnk.net 16 points 2 weeks ago

Wow, that sounds like a nightmare. Here's my workflow:

nix flake update
nixos-rebuild switch

That gives me an atomic, rollbackable update of every service running on the machine.

[–] Overspark@piefed.social 11 points 2 weeks ago (1 children)

Podman automatically updates my containers for me.

[–] jeena@piefed.jeena.net 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Because you point to :latest and everything is dockerized and on one machine? How does it know when it's time to upgrade?

[–] Overspark@piefed.social 6 points 2 weeks ago (3 children)

Yeah only for :latest containers, that's true. It automatically runs a daily service to check whether there are newer images available. You can turn it off per container if you don't want it.

One of the nice things about it is that I have containers running under several different users (for security reasons) so that saves me a lot of effort switching to all these users all the time.
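The mechanism behind this is the AutoUpdate= key in a quadlet file plus the podman-auto-update.timer, which does the daily registry check mentioned above. A minimal rootless sketch (image and unit names are placeholders):

```ini
# ~/.config/containers/systemd/myapp.container
[Container]
Image=docker.io/library/myapp:latest
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

Enabling podman-auto-update.timer for each user then keeps the per-user containers current; setting AutoUpdate=registry only on some units gives the per-container opt-out described above.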

[–] kylian0087@lemmy.dbzer0.com 6 points 2 weeks ago

FluxCD and renovate working together.

[–] halcyoncmdr@piefed.social 6 points 2 weeks ago (1 children)

All of my self-hosted systems are on a TrueNAS system using the built-in app system (basically docker). It notifies me when they need updates, and has a single-click update process for everything. I just log in weekly to see if the button is yellow, then check on it like 15 minutes later to see if anything failed to update. Yeah, they're all on the same hardware, which is probably bad, but nothing there is strictly necessary; it's all just media stuff and for fun.

The one service that is separate is Pangolin on a DigitalOcean droplet. I just handle that manually when it says there's an update. Still effectively just docker, but no easy button.

I could automate these more, but I would spend more time setting it up than I would save since it only takes me a couple minutes maybe once a week.

[–] conrad82@lemmy.world 6 points 2 weeks ago

I do it manually. Update the container version, then docker pull and run.

I have reduced the number of containers to the ones I actually use, so it is manageable.

I use v2 instead of v2.1.0 docker container tags if the provider doesn't make too many bleeding-edge changes between updates.

[–] vegetaaaaaaa@lemmy.world 5 points 2 weeks ago* (last edited 2 weeks ago)
  • use APT repositories when possible -> then unattended-upgrades
  • For OCI images that do not provide tagged releases (looking at you searxng...), podman auto-update
  • for everything else, subscribe to releases RSS feed, read release notes when they come out, check for breaking changes and possibly interesting stuff, update version in ansible playbook, deploy ansible playbook
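The unattended-upgrades part of that setup is usually just two lines of APT configuration (this is the file that `dpkg-reconfigure unattended-upgrades` writes on Debian/Ubuntu):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which packages actually get upgraded automatically is then controlled in 50unattended-upgrades, so you can limit it to security updates.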
[–] ArseAssassin@sopuli.xyz 5 points 2 weeks ago

I run NixOS. Go to the flake file and update channel version.

[–] Cyber@feddit.uk 4 points 2 weeks ago

I don't use docker, etc., so for me, if it's in the normal Arch repos or the AUR then I don't need to think about it until there's a .pacnew file to look at.

Then, it's just the odd git pull on literally 2 devices.

All organised by ansible...

(well except the .pacnew, but I think it's nice to keep in touch with the packages)

[–] AxiomPraxis@sh.itjust.works 4 points 2 weeks ago

Kubernetes + helm charts

[–] pHr34kY@lemmy.world 4 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I wonder if anyone has ever written an update aggregator that would find all the package managers, containers, git repos and whatnot and just update all of them.

Some are a right pain to update, such as Nextcloud. Installing a monthly update should not feel like an enterprise prod deployment.

It's kinda ironic that package managers have caused the exact problem that they are supposed to solve.

[–] jeena@piefed.jeena.net 3 points 2 weeks ago

I am developing a script which will do that specifically for my services.

Right now, at the first stage, it only checks GitHub, Codeberg, etc. to see if there is a new version compared to what each service is currently running.

https://git.jeena.net/jeena/service-update-alerts

I am extending it now with an auto-update part, but it's difficult because sometimes I can't just call a static script, since other migration things need to run. So I have a classifier which takes the release notes and lets a local LLM judge whether it's OK to run the automation or whether I need to do it manually. But for that I am collecting old release notes as examples from each service. This takes forever, so I have only done it for PieFed, PeerTube, Immich and open-webui, and I haven't pushed those changes to the public repo yet.
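The "is there a new version" comparison at the heart of such a checker can be done with plain sort -V; a minimal sketch (the helper name is made up):

```shell
# is_newer CURRENT CANDIDATE
# Succeeds when CANDIDATE sorts as a strictly newer version than CURRENT.
# sort -V handles multi-digit components (v1.10.0 > v1.9.0) and v-prefixes,
# which a plain string comparison would get wrong.
is_newer() {
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$2" ]
}
```

For example, `is_newer v1.9.0 v1.10.0` succeeds, where lexicographic comparison would rank v1.9.0 higher.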

[–] arcine@jlai.lu 4 points 1 week ago (1 children)
# nix flake update
# nixos-rebuild switch
[–] ISOmorph@feddit.org 3 points 2 weeks ago

One of the reasons I switched to YunoHost (the other being backups).

[–] Sanctus@anarchist.nexus 3 points 2 weeks ago

Damn, I'm lucky I just run small game servers, because the old way still works for me, aside from Pi-hole which needs to be updated, but it squeals at me when it needs it so I don't have to remember.

[–] ryan_@piefed.social 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

It’s just a hobby so I know I have room for improvement, but the bigger my environment gets, the more difficult it is to keep everything completely up to date, like you said. Given that, my main priorities are:

  • have as few internet facing services as possible
  • use a reverse proxy
  • separate external and internal servers with a dmz
  • use fail2ban or CrowdSec on servers that have ports forwarded
  • firewall geoblocking
  • BACKUPS, local and remote

Now that being said, I’ve started to use ansible playbooks for deploying OS updates. I have a playbook that uses default options when doing an apt upgrade and it also works for the docker engine user prompt.

About 75% of my services are native installs in LXCs, and I try to always install by including the app's repo so that apt can update it; the other 25% are in docker. I used to use Watchtower but that’s no longer maintained, so I do container updates manually as needed.

It’s not perfect, but it’s just for fun so 🤷

[–] jeena@piefed.jeena.net 2 points 2 weeks ago

Hm, I didn't think of ansible, that's something I should think about using.

[–] irmadlad@lemmy.world 3 points 2 weeks ago

I keep it simple, although reading down through the thread, there are some really nice and ingenious ways people accomplish about the same thing, which is totally awesome. I use a WatchTower fork and run it with --run-once --cleanup. I do this when I feel comfortable that all the early adopters have done all the beta testing for me. Thanks, early adopters. So, about once a month or so, I update 70 Docker containers. As for OS updates, I usually hit those when they deploy. I'm running Ubuntu Jammy, so not a lot of breaking changes in updates. I don't have public-facing services, and I am the only user on my network, so I don't really have to worry too much about that aspect.

[–] iamthetot@piefed.ca 3 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

cd appname && dockup && cd ..

Dockup being an alias for docker compose pull && docker compose up -d

Repeat for the few services I have.
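The "repeat for each service" part can be looped over a stacks directory; a small sketch (the directory layout and the DOCKER override are assumptions, not the commenter's actual setup; DOCKER=echo gives a dry run):

```shell
# update_stacks DIR -- run "compose pull && compose up -d" in every
# subdirectory of DIR that contains a compose file.
# DOCKER can be overridden, e.g. DOCKER=echo for a dry run.
update_stacks() {
  for d in "$1"/*/; do
    # skip directories without a compose file (also covers an empty glob)
    [ -f "${d}docker-compose.yml" ] || [ -f "${d}compose.yaml" ] || continue
    ( cd "$d" && ${DOCKER:-docker} compose pull && ${DOCKER:-docker} compose up -d )
  done
}
```

Called as `update_stacks ~/apps`, it walks every stack in one go instead of cd-ing into each by hand.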

[–] jeena@piefed.jeena.net 5 points 2 weeks ago (1 children)

So everything is dockerized and points to :latest?

What about the necessary changes to the docker compose files? What about changes necessary in nginx configs?

I guess you also read each release notes manually?

[–] probable_possum@leminal.space 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I don't understand. docker compose up starts the container. When does the docker compose pull happen? Or is there an update directive in the compose file?

[–] iamthetot@piefed.ca 2 points 2 weeks ago (2 children)

Whoops, I forgot that the alias includes a pull for the latest versions.

[–] Fedegenerate@fedinsfw.app 3 points 2 weeks ago* (last edited 2 weeks ago)

Fine, I'll be the low bar.

Proxmox, I just use the GUI to update

I use community-scripts almost exclusively. The community-scripts cron LXC updater does the heavy lifting. pct enter [lxc] followed by update does a bunch of work too.

For Docker, I use a couple lxcs with Dockge on it, the "update" button takes me most of the rest of the way.

Finally, I have a couple of remote machines (DietPi). I haven't figured out updating over Tailscale yet, so I just go round semi-frequently for the apt update && apt upgrade -y.

VMs get the apt update && apt upgrade -y too. I keep a bare-bones Mint VM as a virtual laptop, as I don't have one. I'll do what I need to do, and if I had to install software I'll just nuke the VM and go again from the bare-bones template.

[–] ThunderComplex@lemmy.today 3 points 1 week ago (1 children)

Since all my services are dockerized I just pull new images sporadically. But I think I should invest some time into finding automatic update reminders, especially when I have to hear about critical security updates from some random person on mastodon.

[–] beeb@lemmy.zip 4 points 1 week ago (1 children)

I switched to dockhand and it handles that nicely, including scanning for vulnerabilities in new images.

[–] Decronym@lemmy.decronym.xyz 3 points 2 weeks ago* (last edited 1 week ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

  • Git: Popular version control system, primarily for code
  • HTTP: Hypertext Transfer Protocol, the Web
  • LXC: Linux Containers
  • SSL: Secure Sockets Layer, for transparent encryption
  • TLS: Transport Layer Security, supersedes SSL
  • k8s: Kubernetes container management package
  • nginx: Popular HTTP server

5 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

[Thread #233 for this comm, first seen 12th Apr 2026, 05:50] [FAQ] [Full list] [Contact] [Source code]

[–] ccryx@discuss.tchncs.de 2 points 2 weeks ago

All my services run in podman containers managed by systemd (using quadlets). They usually point to the :latest tag and I've configured the units to pull on start when there is a new version in my repository. Since I'm using openSUSE MicroOS, my server (and thus all services) restarts regularly.

For the units that are configured differently, I update the versions in their respective ansible playbooks and redeploy (though I guess I could optimize this a bit, I've only scratched the surface of ansible).

[–] FlowerFan@piefed.blahaj.zone 2 points 1 week ago

Arcane docker server checks for updates, notifies me when they're available

for security relevant stuff I just get notifications of new github releases

[–] Alvaro@lemmy.blahaj.zone 2 points 2 weeks ago (1 children)

Personally I just wrote a bash script that does all of my regular updates and I run it manually whenever

[–] jeena@piefed.jeena.net 2 points 2 weeks ago (1 children)

And it's stable enough for you? Do you go service by service or is it good enough for everything?

[–] Alvaro@lemmy.blahaj.zone 3 points 2 weeks ago

For docker compose I have a part of the script that gets all subdirs of the "projects" dir and does an update for each one (that way any new service will be updated without having to manually specify it in the script); for everything else I just hard-coded the update process.

Generally 90% of my updates are just running the script, on the other 10% I do some manual work (like updating configs, etc)

But for the most part this is me refusing to use already existing tools that could probably do most of this better

[–] pineapple@lemmy.ml 2 points 1 week ago

Proxmox community scripts has some nice update tools

[–] Eldaroth@lemmy.world 2 points 2 weeks ago

I run most of my services in containers with Podman Quadlets. One of them is Forgejo on which I have repos for all my quadlet (systemd) files and use renovate to update the image tags. Renovate creates PRs and can also show you release notes for the image it wants you to update to.

I currently check the PRs manually as well as pulling the latest git commits on my server. But this could also be further automated to one's liking.

[–] splendid9583@kbin.earth 2 points 2 weeks ago

Information about similar tools is available around https://en.wikipedia.org/wiki/Infrastructure_as_code#Tools

[–] ken@discuss.tchncs.de 1 points 1 week ago* (last edited 1 week ago)

A dedicated Forgejo instance f.example.com.

For a small set of trusted "base" images (e.g. docker.io/alpine and docker.io/debian): a Forgejo Action on a separate small runner, scheduled via cron to sync images to f.example.com/dockerio/ using skopeo copy.

Then all other runners have their docker/podman configuration changed to use that internal forgejo container registry instead of docker.io.
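As a sketch, that scheduled sync job could be a Forgejo Actions workflow along these lines (the workflow path, runner label, image choice and cron interval are all assumptions filled in around the setup described above):

```yaml
# .forgejo/workflows/sync-base-images.yml
on:
  schedule:
    - cron: '0 4 * * *'   # daily sync of trusted base images

jobs:
  sync:
    runs-on: small-runner
    steps:
      - name: Mirror alpine into the internal registry
        run: |
          skopeo copy \
            docker://docker.io/library/alpine:latest \
            docker://f.example.com/dockerio/alpine:latest
```

skopeo copy talks registry-to-registry, so the runner never needs a local container image store.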

Other images are built from source in the Forgejo Actions CI. Not everything needs to (or even should) be fully automated right off. You can keep some workflows manual while starting out and then increase automation as you tighten up your setup and get more confident in it. Follow the usual best practices around security and keep permissions scoped, giving them out only as needed.

Git repos are mirrored as Forgejo repo mirrors, forked if relevant, then built with Forgejo Actions and published to f.example.com/whatever/. Rarely, but sometimes, it is worth spending time on reusing existing GitHub Workflows from upstreams. More often I find it easier to just reuse my own workflows.

This way, runners can be kept fully offline, and builds only access internal resources:

  • apt/apk repo mirror or proxy
  • synced base container images
  • synced git sources

Same idea for npm or pypi packages etc.

Set up renovate^1^ and iterate on its configuration to reduce insanity. Look in the forgejo and codeberg infra repos for examples of how to automate rebasing of forked repos onto mirrors.

I would previously achieve the same thing by wiring together more targeted services and that's still viable but Forgejo makes it easy if you want it all in one box. Just add TLS.

^1^: Or anyone have anything better that's straightforward to integrate? I'm not a huge fan of all the npm modules it pulls in or its github-centric perspective. Giving the same treatment to renovate itself here was a little bit more effort and digging than I think should really be necessary.
