this post was submitted on 12 Apr 2026
110 points (97.4% liked)

Selfhosted


Back in the day it was nice: apt-get update && apt-get upgrade and you were done.

But today every tool/service has its own way of being installed and updated:

  • docker:latest
  • docker:v1.2.3
  • custom script
  • git checkout v1.2.3
  • same, but with custom migration commands afterwards
  • custom commands that change from release to release
  • updates expected to be run as a specific user
  • nginx config updates
  • changes to its own default config that the service depends on
  • expecting newer versions of other tools
  • etc.

I self-host around 20 services: PieFed, Mastodon, PeerTube, Paperless-ngx, Immich, open-webui, Grafana, etc. And all of them have dependencies which need to be updated too.

And nowadays you can't really keep running an older version, especially when it's internet-facing.

So anyway, what are your strategies for staying sane while keeping all your self-hosted services up to date?

[–] ken@discuss.tchncs.de 1 points 1 week ago* (last edited 1 week ago)

A dedicated Forgejo instance at f.example.com.

For a small set of trusted "base" images (e.g. docker.io/alpine and docker.io/debian): a Forgejo Action on a separate small runner, scheduled on cron, syncs the images to f.example.com/dockerio/ using skopeo copy.
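A minimal sketch of such a sync job, assuming Forgejo Actions' GitHub-compatible workflow syntax; the schedule, runner label, image list, and registry path are placeholders, and registry credentials are omitted:

```yaml
# .forgejo/workflows/sync-base-images.yaml — hypothetical example
on:
  schedule:
    - cron: "0 4 * * *"   # nightly
jobs:
  sync:
    runs-on: small-runner  # a dedicated runner label
    steps:
      - name: Sync trusted base images into the internal registry
        run: |
          for img in library/alpine:3.20 library/debian:bookworm; do
            skopeo copy --all \
              "docker://docker.io/${img}" \
              "docker://f.example.com/dockerio/${img}"
          done
```

skopeo copy works registry-to-registry without a local Docker daemon, which is why it suits a small dedicated runner.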

Then all other runners have their docker/podman configuration changed to use that internal Forgejo container registry instead of docker.io.
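For podman, one way to do that redirection is a rewrite entry in registries.conf (a sketch; the internal location reuses the example hostname above):

```toml
# /etc/containers/registries.conf — resolve docker.io pulls against the internal mirror
[[registry]]
prefix = "docker.io"
location = "f.example.com/dockerio"
```

With this in place, a pull of docker.io/library/alpine transparently fetches f.example.com/dockerio/library/alpine.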

Other images are built from source in the Forgejo Actions CI. Not everything needs to (or even should) be fully automated right away. You can keep some workflows manual while starting out, then increase automation as you tighten up your setup and get more confident in it. Follow the usual security best practices and keep permissions scoped, granting them out only as needed.

Git repos are mirrored as Forgejo repo mirrors, forked if relevant, then built with Forgejo Actions and published to f.example.com/whatever/. Rarely, but sometimes, it's worth spending time on reusing existing GitHub Workflows from upstreams. More often I find it easier to just reuse my own workflows.
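The build-and-publish half of that could look roughly like this (a sketch, not the commenter's actual workflow; image name, tag trigger, and runner label are placeholders):

```yaml
# .forgejo/workflows/build.yaml — hypothetical build from a mirrored/forked repo
on:
  push:
    tags: ["v*"]
jobs:
  build:
    runs-on: docker
    steps:
      - uses: actions/checkout@v4
      - name: Build and push to the internal registry
        run: |
          podman build -t "f.example.com/whatever/project:${GITHUB_REF_NAME}" .
          podman push "f.example.com/whatever/project:${GITHUB_REF_NAME}"
```

Because the runner's registry config points at the internal mirror, the base image referenced in the Dockerfile also resolves internally.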

This way, runners can be kept fully offline, and builds only access internal resources:

  • apt/apk repo mirror or proxy
  • synced base container images
  • synced git sources

Same idea for npm or pypi packages etc.
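For pypi, for instance, pip can be pointed at the internal index (a sketch, assuming Forgejo's built-in PyPI package registry; the owner name "mirrors" is a placeholder):

```ini
# ~/.config/pip/pip.conf — resolve packages through the internal registry only
[global]
index-url = https://f.example.com/api/packages/mirrors/pypi/simple/
```

npm has the analogous registry= setting in .npmrc.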

Set up renovate^1^ and iterate on its configuration to reduce insanity. Look in the Forgejo and Codeberg infra repos for examples of how to automate rebasing forked repos onto their mirrors.
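A starting point for self-hosting Renovate against Forgejo might look like this (a sketch; Renovate's gitea platform covers Forgejo's compatible API, and the endpoint and token are placeholders):

```js
// config.js for a self-hosted Renovate run — values are illustrative
module.exports = {
  platform: 'gitea',             // Forgejo speaks the Gitea-compatible API
  endpoint: 'https://f.example.com/api/v1/',
  token: process.env.RENOVATE_TOKEN,
  autodiscover: true,            // scan every repo the token can see
};
```

Run it on a schedule from its own workflow, same as the image sync.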

I used to achieve the same thing by wiring together more targeted services, and that's still viable, but Forgejo makes it easy if you want it all in one box. Just add TLS.

^1^: Or does anyone have anything better that's straightforward to integrate? I'm not a huge fan of all the npm modules it pulls in or its GitHub-centric perspective. Giving renovate itself the same treatment here took a bit more effort and digging than I think should really be necessary.