this post was submitted on 24 Sep 2025
154 points (94.8% liked)

Selfhosted

53833 readers

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago

Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

[–] Evotech@lemmy.world 4 points 2 months ago

It's just another system to maintain, another link in the chain that can fail.

I run all my services on my personal gaming pc.

[–] brucethemoose@lemmy.world 4 points 2 months ago* (last edited 2 months ago) (2 children)

In my case it’s performance and sheer RAM need.

GLM 4.5 needs like 112GB RAM and absolutely every megabyte of VRAM from the GPU, at least without the quantization getting too compressed to use. I’m already swapping a tiny bit and simply cannot afford the overhead.

I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.

[–] pedro@lemmy.dbzer0.com 4 points 2 months ago* (last edited 2 months ago) (4 children)

I've not cracked the Docker nut yet. I don't get how I back up my containers and their data. I would also need to transfer my Plex database into its container while switching from Windows to Linux. I love Linux, but I haven't figured out these two things yet.

[–] Passerby6497@lemmy.world 3 points 2 months ago* (last edited 2 months ago)

All your docker data can be saved to a mapped local disk, then backup is the same as it ever is. Throw borg or something on it and you're gold.

Look into docker compose and volumes to get an idea of where to start.
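To make that concrete, here's a minimal compose sketch of the "mapped local disk" idea — the service name (`pihole`) and the `/srv/appdata` path are illustrative, not from the comment above:

```yaml
# docker-compose.yml - bind-mount container state onto the host
services:
  pihole:
    image: pihole/pihole:latest
    volumes:
      # Everything the container writes lands under /srv/appdata on the
      # host, so a plain file-level backup (borg, restic, rsync) of that
      # directory captures all persistent state.
      - /srv/appdata/pihole/etc-pihole:/etc/pihole
      - /srv/appdata/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```

The image itself is disposable and can always be re-pulled; only the bind-mounted directories need to be in the backup.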

[–] lka1988@lemmy.dbzer0.com 4 points 2 months ago (3 children)

I run my NAS and Home Assistant on bare metal.

  • NAS: OMV on a Mac mini with a separate drive case
  • Home Assistant: HAOS on a Lenovo M710q, since 1) it has a USB zigbee adapter and 2) HAOS on bare metal is more flexible

Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it's Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.

[–] medem@lemmy.wtf 4 points 2 months ago

The fact that I bought all my machines used (and mostly on sale), and that not one of them is general purpose, id est, I bought each piece of hardware with a (more or less) concrete idea of what its use case would be. For example, my machine acting as a file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is being a dumb client for all the bigger servers. I develop programs on one machine and surf the internet and watch videos on the other. I have no use case for VMs besides the Logical Domains I set up on one of my SPARC hosts.

[–] hperrin@lemmy.ca 3 points 2 months ago

There’s one thing I’m hosting on bare metal, a WebDAV server. I’m running it on the host because it uses PAM for authentication, and that doesn’t work in a container.

[–] Andres4NY@social.ridetrans.it 3 points 2 months ago (3 children)

@kiol I mean, I use both. If something has a Debian package and is well-maintained, I'll happily use that. For example, Prosody is packaged nicely; there's no need for a container there. I also don't want to upgrade to the latest version all the time. Or Dovecot, which just had a nasty cache bug in the latest version that allowed people to view other people's mailboxes. Since I'm still on Debian 12 on my mail server, I remain unaffected, and I can let the bugs be shaken out before I upgrade.

[–] towerful@programming.dev 3 points 2 months ago (2 children)

I used to always run Proxmox to set up Docker VMs.

Then I found Talos Linux, a dedicated distro for Kubernetes, which aligned with my desire to learn k8s.
It was great. I ran it bare-metal on a 3-node cluster. I learned a lot, I got my project complete, everything went fine.
I will use Talos Linux again.
However, next time I'm running Proxmox with 2 VMs per node: 3 Talos control VMs and 3 Talos worker VMs.
I imagine running 6 servers with Talos is the way to go. Running them hyperconverged was a massive pain. Separating the control plane and the data/worker plane (or whatever it's called) makes sense - it's the way k8s is designed.
It wasn't the hardware that had issues, but various workloads. And being able to restart or wipe a control node or a worker node would've made things so much easier.

Also, why wouldn't I run proxmox?
Overhead is minimal, I get a nice overview and a nice UI, and I get snapshots and backups.

[–] tofu@lemmy.nocturnal.garden 3 points 2 months ago (2 children)

TrueNAS is on bare metal, as I have a dedicated NAS machine that's not doing anything else, and it's also not recommended to virtualize it. Not sure if that counts.

Same for the firewall (OPNsense), since it is its own machine.

[–] 9tr6gyp3@lemmy.world 3 points 2 months ago (3 children)

I thought about running something like proxmox, but everything is too pooled, too specialized, or proxmox doesn't provide the packages I want to use.

Just went with Arch as the host OS, and I firejail or LXC any processes I want contained.

[–] corsicanguppy@lemmy.ca 3 points 2 months ago (1 children)

I don't host on containers because I used to do OS security for a while.

[–] Kurious84@lemmings.world 3 points 2 months ago

Anything you want dedicated performance on, or that requires fine-tuning for a specific performance use case. They're out there.

[–] jaemo@sh.itjust.works 3 points 2 months ago

I generally abstract to docker anything I don't want to bother with and just have it work.

If I'm working on something that requires lots of back and forth syncing between host and container, I'll run that on bare metal and have it talk to things in docker.

E.g.: working on an app or a website or something in the language of choice on the framework of choice, but Postgres and Redis live in Docker. Just the app I'm messing with and its direct dependencies run outside.

[–] akincisor@sh.itjust.works 3 points 2 months ago (2 children)

I have a single micro-ITX HTPC/media server/NAS in my bedroom. Why use containers?

[–] Surp@lemmy.world 3 points 2 months ago (1 children)

What are you doing running your VMs on bare metal? Time is a flat circle.

[–] LifeInMultipleChoice@lemmy.world 3 points 2 months ago (1 children)

For me it's lack of understanding, usually. I haven't sat down and really learned what Docker is/does. And when I tried to use it once, I ended up with errors (thankfully they all seemed contained by the container), but I just haven't gotten around to looking into it more than seeing suggestions to install, say, Pihole in it. Pretty sure I installed Pihole outside of one. Jellyfin outside, copyparty outside, and something else I'm forgetting at the moment.

I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it's not something I normally use.

I guess I just haven't been forced to see the upsides yet. But am always wanting to learn

[–] slazer2au@lemmy.world 3 points 2 months ago (5 children)

Containerisation is to applications what virtualisation is to hardware.

VMs share the same CPU, memory, and storage on the same host.
Containers share the same binaries in an OS.

[–] DarkMetatron@feddit.org 3 points 2 months ago

My servers and NAS were created long before Docker was a thing, and as I am running them on a rolling release distribution there never was a reason to change anything. It works perfectly fine the way it is, and it will most likely run perfectly fine the next 10+ years too.

Well, I am planning, when I find the time to research a good successor, to replace the aging HPE ProLiant MicroServer Gen8 that I use as a home server/NAS. Maybe I will then set everything up clean and migrate the services to Docker/Podman/whatever is fancy then. But most likely I will just transfer all the disks and keep the old system running on newer hardware. Life is short...

[–] kossa@feddit.org 2 points 2 months ago

Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe). So it is basically a legacy thing.

My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.

[–] OnfireNFS@lemmy.world 2 points 2 months ago

This reminds me of a question I saw a couple of years ago. It was basically: why would you stick with bare metal over running Proxmox with a single VM?

It kinda stuck with me, and since then I've reimaged some of my bare-metal servers with exactly that. It just makes backup and restore/snapshots so much easier. It's also really convenient to have a web interface to manage the computer.

Probably doesn't work for everyone but it works for me

[–] erock@lemmy.ml 2 points 2 months ago

Here’s my homelab journey: https://bower.sh/homelab

Basically, containers plus a GPU are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also do not support being split between VMs. At the end of the day, it's a bunch of tinkering, which is valuable if that's your goal. I learned what I wanted; now I'm back to Arch running everything with systemd and quadlet.
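For anyone curious what the quadlet setup mentioned above looks like: it's a small INI file that systemd generates a Podman service from. A minimal hypothetical sketch (the unit name and image are illustrative, not from the comment):

```ini
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example quadlet-managed container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it behaves like any other unit: `systemctl --user start whoami.service`.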

[–] Routhinator@startrek.website 2 points 2 months ago

I'm running Kubernetes on bare metal.

[–] bizarroland@lemmy.world 2 points 2 months ago (2 children)

I'm running a TrueNAS server on bare metal with a handful of hard drives. I have virtualized it in the past, but meh. I'm also using TrueNAS's internal features to host a Jellyfin server and a couple of other easy-to-deploy containers.
