I don't host in containers because I used to do OS security for a while.
Say more: what did that experience teach you? And what would you do instead?
@kiol I mean, I use both. If something has a Debian package and is well-maintained, I'll happily use that. For example, Prosody is packaged nicely; there's no need for a container there. I also don't want to upgrade to the latest version all the time. Or Dovecot, which just had a nasty cache bug in the latest version that allows people to view other people's mailboxes. Since I'm still on Debian 12 on my mail server, I remain unaffected and can let the bugs be shaken out before I upgrade.
I have a single micro-ITX HTPC/media server/NAS in my bedroom. Why use containers?
I would always run Proxmox to set up Docker VMs.
I found Talos Linux, a dedicated distro for Kubernetes, which aligned with my desire to learn k8s.
It was great. I ran it bare-metal on a 3-node cluster. I learned a lot, got my project done, and everything went fine.
I will use Talos Linux again.
However, next time I'm running Proxmox with two VMs per node: three Talos control-plane VMs and three Talos worker VMs.
I imagine running six Talos servers that way is how to go about it. Running them hyperconverged was a massive pain. Separating the control plane and the worker/data plane makes sense - it's the way k8s is designed.
It wasn't the hardware that had issues, but various workloads. Being able to restart or wipe a control node or a worker node independently would've made things so much easier. A rough sketch of that split is below.
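For anyone curious what that 3+3 split looks like in practice, here's a minimal sketch with talosctl. The cluster name, endpoint, and node IPs are all made up, and it assumes the six VMs are already booted into Talos maintenance mode:

    # generate controlplane.yaml, worker.yaml, and talosconfig
    talosctl gen config homelab https://10.0.0.10:6443
    # apply the control-plane config to one VM per Proxmox node
    for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do
      talosctl apply-config --insecure --nodes $ip --file controlplane.yaml
    done
    # apply the worker config to the other three VMs
    for ip in 10.0.0.21 10.0.0.22 10.0.0.23; do
      talosctl apply-config --insecure --nodes $ip --file worker.yaml
    done
    # bootstrap etcd on exactly one control-plane node
    talosctl bootstrap --nodes 10.0.0.11 --endpoints 10.0.0.11 --talosconfig ./talosconfig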
Also, why wouldn't I run Proxmox?
The overhead is minimal, I get a nice overview and a nice UI, and I get snapshots and backups.
I thought about running something like Proxmox, but everything is too pooled or too specialized, or Proxmox doesn't provide the packages I want to use.
So I just went with Arch as the host OS and firejail or LXC for any processes I want contained.
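As a rough sketch of what that looks like (the paths and binary names here are hypothetical):

    # no network and a private home directory for an untrusted process
    firejail --net=none --private=/srv/jail/someapp /usr/bin/someapp
    # or attach the sandbox to a bridge interface so the service stays reachable
    firejail --net=br0 --private=/srv/jail/webapp /usr/bin/webapp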
I've never installed a package on Proxmox.
I've BARELY interacted with the CLI on Proxmox (I have a script that creates a nice Debian VM template, plus the occasional need to force-kill a VM).
What would you install on Proxmox?!
Firmware update utilities, host OS filesystem encryption packages, HBA management tools, and temperature monitoring. And then a lot of the packages had bugs that were fixed in newer versions, but Proxmox only provided old ones.
For me it's usually lack of understanding. I haven't sat down and really learned what Docker is/does. When I tried to use it once I ended up with errors (thankfully they all seemed contained to the container), but I just haven't gotten around to looking into it more than seeing suggestions to install, say, Pi-hole in it. Pretty sure I installed Pi-hole outside of one. Jellyfin outside, copyparty outside, and something else I'm forgetting at the moment.
I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it's not something I normally use.
I guess I just haven't been forced to see the upsides yet. But I'm always wanting to learn.
Containerisation is to applications as virtual machines are to hardware.
VMs share the same CPU, memory, and storage on the same host.
Containers share the same binaries in an OS.
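An easy way to see the difference for yourself, assuming Docker and the public alpine image are available:

    uname -r                          # kernel version on the host
    docker run --rm alpine uname -r   # prints the same version: the container
                                      # shares the host kernel, only the
                                      # userspace binaries differ
    # a VM, by contrast, boots its own kernel on virtualized hardware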
When you say binaries, do you mean locally stored directories, kind of like what Lutris or Steam would do for a Windows game? (Create a fake C:\)
Not so much a fake one; more like overlaying the actual directory with the specific files that container needs.
Take the Linux lib directory. It exists on the host and has Python 3.12 installed. Your Docker container may need Python 3.14, so an overlay directory is created that redirects calls to /lib/python to /lib/python3.14 instead of the regular symlinked /lib/python3.12.
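You can play with the same mechanism by hand with a plain overlayfs mount; a minimal sketch with throwaway paths:

    mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/merged
    echo "3.12" > /tmp/lower/python-version
    sudo mount -t overlay overlay \
      -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work \
      /tmp/merged
    echo "3.14" > /tmp/merged/python-version   # the write lands in /tmp/upper
    cat /tmp/lower/python-version              # the lower (host) copy still says 3.12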
So let's say I theoretically wanted to move a Docker container to another device, or maybe I were re-installing an OS or moving to another distro. Could I in theory drag my local Docker container to an external drive, throw my device in a lake, and pull that container off onto the new device? If so, what then? Do I link the startups, or is there a "docker config" where they can all be linked and I can tell it which ones to launch on OS launch, user launch, with a delay, or whatnot?
For ease of moving containers between hosts, I would use a docker-compose.yaml to set how you want storage shared, what ports to present to the host, and what environment variables your application wants. Using WordPress as an example, this would be your starting point:
https://github.com/docker/awesome-compose/blob/master/wordpress-mysql/compose.yaml
All the settings for the database are listed under the db heading. You would have your actual database files stored in /home/user/Wordpress/db_data, and you would link /home/user/Wordpress/db_data to /var/lib/mysql inside the container with the lines:
volumes:
  - ./db_data:/var/lib/mysql
As the compose file will also be in /home/user/Wordpress/, you can use the relative path ./db_data (a bare db_data: would be treated as a named Docker volume rather than this folder).
That way, if you wanted to change hosts, you just copy the /home/user/Wordpress folder to the new server, run docker compose up -d, and boom, your server is up. No need to faff about.
Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.
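On the "which ones to launch on OS launch" part of the question: that's what Docker restart policies are for. The usual pattern is to put restart: unless-stopped under each service in the compose file and let the Docker daemon start at boot. A sketch assuming a systemd host (the container name is hypothetical):

    sudo systemctl enable --now docker   # Docker daemon starts at every boot
    docker compose up -d                 # bring the stack up once
    # services marked restart: unless-stopped come back by themselves after a reboot;
    # an already-running container can be switched over without recreating it:
    docker update --restart unless-stopped my_container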
TrueNAS is on bare metal, as I have a dedicated NAS machine that's not doing anything else, and it's also not recommended to virtualize. Not sure if that counts.
Same for the firewall (OPNsense), since it's its own machine.
It depends on the service and the desired level of the stack.
I generally run services directly on things like a Raspberry Pi, because VMs and containers add complexity that isn't really suitable for the task.
At work, I run services in Docker in VMs because the benefits far outweigh the complexity.
I'm selfhosting Forgejo and I don't really see the benefit of migrating to a container. I can easily install and update it via the package manager, so what benefit does containerization give?
Depends on the application for me. For Mastodon, I want to allow 12K-character posts, more than 4 poll choices, and custom themes. Can't do that with the stock Docker containers. For PeerTube and Mobilizon, I use Docker containers.
Why could you not have that Mastodon setup in containers? Sounds normal afaik
I'll chime in: simplicity. It's much easier to keep a few patches that apply to local OS builds. I use Nix, so my Mastodon microVM config just has an extra patch line. If there's a new Mastodon update, the patch will most probably work for it too.
Yes, I could build my own Docker container, but you can't easily build one with a patch (for Mastodon specifically, you need to patch the JS before minification). It's doable, but it's quite annoying. And then you need to keep track of upstream and update your Dockerfile with new versions.
I'm running a TrueNAS server on bare metal with a handful of hard drives. I have virtualized it in the past, but meh. I'm also using TrueNAS's internal features to host a Jellyfin server and a couple of other easy-to-deploy containers.
So TrueNAS itself is running your containers?
Yeah, the more recent versions basically have a form of Docker as part of the setup.
I believe it's now running on Debian instead of FreeBSD, which probably simplified the container setup.