It's just another system to maintain, another link in the chain that can fail.
I run all my services on my personal gaming PC.
In my case it’s performance and sheer RAM need.
GLM 4.5 needs like 112GB RAM and absolutely every megabyte of VRAM from the GPU, at least without the quantization getting too compressed to use. I’m already swapping a tiny bit and simply cannot afford the overhead.
I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.
I've not cracked the Docker nut yet. I don't get how I back up my containers and their data. I would also need to transfer my Plex database into its container while switching from Windows to Linux. I love Linux but haven't figured out these two things yet.
All your Docker data can be saved to a mapped local disk, then backup is the same as it ever is. Throw Borg or something on it and you're gold.
Look into docker compose and volumes to get an idea of where to start.
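To make that concrete, here's a minimal sketch of a compose file with bind-mounted volumes (the service, image, and paths are just examples, not anyone's actual setup):

```yaml
# docker-compose.yml - minimal example; service name and paths are placeholders
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./config:/config   # everything the app writes lands here on the host
      - ./media:/media     # read-mostly media library
    restart: unless-stopped
```

Everything the container writes ends up in ./config next to the compose file, so pointing Borg (or restic, or rsync) at that directory backs up the container's state. The image itself is disposable and can always be re-pulled.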
I run my NAS and Home Assistant on bare metal.
Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it's Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.
The fact that I bought all my machines used (and mostly on sale), and that not one of them is general purpose, i.e., I bought each piece of hardware with a (more or less) concrete idea of what its use case would be. For example, my machine acting as a file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is being a dumb client for all the bigger servers. I develop programs on one machine and surf the internet and watch videos on the other. I have no use case for VMs besides the Logical Domains I set up on one of my SPARC hosts.
There's one thing I'm hosting on bare metal: a WebDAV server. I'm running it on the host because it uses PAM for authentication, and that doesn't work in a container.
@kiol I mean, I use both. If something has a Debian package and is well maintained, I'll happily use that. For example, prosody is packaged nicely; there's no need for a container there. I also don't want to upgrade to the latest version all the time. Or Dovecot, which just had a nasty cache bug in the latest version that allows people to view other people's mailboxes. Since I'm still on Debian 12 on my mail server, I remain unaffected, and I can let the bugs be shaken out before I upgrade.
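For what it's worth, the package route is also the one with the least ceremony; a rough sketch (package names are the standard Debian ones):

```sh
# install the distro-packaged version instead of pulling a container image
sudo apt install prosody

# keep dovecot pinned to the current (unaffected) version until the dust settles
sudo apt-mark hold dovecot-core
```

`apt-mark unhold` later, once the fix has trickled into the repos.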
I would always run proxmox to set up docker VMs.
I found Talos Linux, a dedicated distro for Kubernetes, which aligned with my desire to learn k8s.
It was great. I ran it bare metal on a 3-node cluster. I learned a lot, I got my project done, and everything went fine.
I will use Talos Linux again.
However, next time I'm running Proxmox with 2 VMs per node: 3 Talos control VMs and 3 Talos worker VMs.
I imagine running 6 servers with Talos is the way to go. Running them hyperconverged was a massive pain. Separating control plane and data/worker plane (or whatever it is) makes sense - it's the way k8s is designed.
It wasn't the hardware that had issues, but various workloads. And being able to restart or wipe a control node or a worker node would've made things so much easier.
Also, why wouldn't I run proxmox?
Overhead is minimal, I get a nice overview and a nice UI, and I get snapshots and backups.
TrueNAS is on bare metal, as I have a dedicated NAS machine that's not doing anything else, and it's also not recommended to virtualize. Not sure if that counts.
Same for the firewall (OPNsense), since it is its own machine.
I thought about running something like proxmox, but everything is too pooled, too specialized, or proxmox doesn't provide the packages I want to use.
Just went with Arch as the host OS and firejail or LXC for any processes I want contained.
I don't host in containers because I used to do OS security for a while.
Anything you want dedicated performance on, or that requires fine-tuning for specific performance use cases. They're out there.
I generally abstract to docker anything I don't want to bother with and just have it work.
If I'm working on something that requires lots of back and forth syncing between host and container, I'll run that on bare metal and have it talk to things in docker.
I.e., working on an app or a website or something in the language of choice on the framework of choice, but Postgres and Redis are living in Docker. Just the app I'm messing with and its direct dependencies run outside.
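A sketch of that split, with only the backing services in compose (image tags and the password are placeholders for local dev):

```yaml
# docker-compose.yml - only the dependencies; the app itself runs on the host
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devpassword   # placeholder, local dev only
    ports:
      - "5432:5432"                    # published so the host-side app can connect
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```

The app just points at localhost:5432 and localhost:6379, and the data stores stay out of the way.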
I have a single micro-ITX HTPC/media server/NAS in my bedroom. Why use containers?
What are you doing running your VMs on bare metal? Time is a flat circle.
For me it's usually lack of understanding. I haven't sat down and really learned what Docker is/does. And when I tried to use it once, I ended up with errors (thankfully they all seemed contained by the container), but I just haven't gotten around to looking into it more than seeing suggestions to install, say, Pi-hole in it. Pretty sure I installed Pi-hole outside of one. Jellyfin outside, copyparty outside, and something else I'm forgetting at the moment.
I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it's not something I normally use.
I guess I just haven't been forced to see the upsides yet. But I'm always wanting to learn.
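If you do circle back to it, Pi-hole is a reasonable first container; the usual pattern looks roughly like this (ports, paths, and the timezone are the commonly documented defaults or placeholders, so check the image docs before copying):

```sh
# rough sketch of Pi-hole in a container, following the image's published examples
docker run -d \
  --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 80:80/tcp \
  -e TZ=Europe/Berlin \
  -v "$(pwd)/etc-pihole:/etc/pihole" \
  -v "$(pwd)/etc-dnsmasq.d:/etc/dnsmasq.d" \
  --restart unless-stopped \
  pihole/pihole
```

The two -v flags are the whole backup story: the container's state lives in those host directories.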
Containerisation is to applications as virtual machines are to hardware.
VMs share the same CPU, memory, and storage on the same host.
Containers share the host OS's kernel, rather than each running a full OS.
My servers and NAS were created long before Docker was a thing, and as I am running them on a rolling-release distribution, there never was a reason to change anything. It works perfectly fine the way it is, and it will most likely run perfectly fine for the next 10+ years too.
Well, I am planning, when I find the time to research a good successor, to replace my aging HPE ProLiant MicroServer Gen8 that I use as a home server/NAS. Maybe I will then set everything up cleanly and migrate the services to docker/podman/whatever is fancy then. But most likely I will only transfer all the disks and keep the old system running on newer hardware. Life is short...
Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe). So it is basically a legacy thing.
My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.
This reminds me of a question I saw a couple years ago. It was basically why would you stick with bare metal over running Proxmox with a single VM.
It kinda stuck with me, and since then I've reimaged some of my bare metal servers with exactly that. It just makes backup and restore/snapshots so much easier. It's also really convenient to have a web interface to manage the computer.
Probably doesn't work for everyone, but it works for me.
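The snapshot/rollback loop in particular is hard to give up once you have it; from the shell it's roughly this (the VMID and storage name are placeholders):

```sh
# snapshot VM 100 before poking at it (100 is just an example VMID)
qm snapshot 100 pre-upgrade

# roll back if it goes sideways
qm rollback 100 pre-upgrade

# full backup to a storage called "backups"
vzdump 100 --storage backups --mode snapshot --compress zstd
```

The same operations are a couple of clicks in the web UI, which is most of the appeal.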
Here’s my homelab journey: https://bower.sh/homelab
Basically, containers and GPUs are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also do not support being split between VMs. At the end of the day, it's a bunch of tinkering, which is valuable if that's your goal. I learned what I wanted; now I'm back to Arch, running everything with systemd and quadlet.
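For anyone who hasn't seen quadlet: it's just a small unit file that Podman's systemd generator turns into a service. A rough sketch (image, ports, and paths are examples, not my actual setup):

```ini
# ~/.config/containers/systemd/jellyfin.container  (example name and paths)
[Container]
Image=docker.io/jellyfin/jellyfin:latest
Volume=/srv/jellyfin/config:/config
Volume=/srv/media:/media
PublishPort=8096:8096

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it shows up as a normal jellyfin.service you can start, stop, and journal like anything else.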
I'm running Kube on bare metal.
I'm running a TrueNAS server on bare metal with a handful of hard drives. I have virtualized it in the past, but meh. I'm also using TrueNAS's internal features to host a Jellyfin server and a couple of other easy-to-deploy containers.