this post was submitted on 24 Sep 2025
154 points (94.8% liked)

Selfhosted


Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

(page 3) 50 comments
[–] TheMightyCat@ani.social 2 points 3 months ago (1 children)

I'm self-hosting Forgejo and I don't really see the benefit of migrating to a container. I can easily install and update it via the package manager, so what benefit would containerization give me?

[–] tychosmoose@lemmy.world 2 points 3 months ago

I'm doing this on a couple of machines. Only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS, and some other really small stuff. Not using VMs or LXC due to low-end hardware (a Pi and an older tiny PC). Not using containers due to lack of experience with them, and a little discomfort with Docker's central-daemon model and with running containers built by people I don't know.

The migration path I'm working on for myself is switching to Podman quadlets for rootless operation, more isolation between containers, and the benefit of management and updates via systemd. So far my testing for that migration has been slow due to other projects. I'll probably get it rolling on Debian 13 soon.
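For anyone curious what a quadlet looks like: it's just a small unit-style file that Podman turns into a systemd service. A minimal rootless sketch (the image name, port, and filename are placeholders, not anyone's actual setup) might be:

```ini
# ~/.config/containers/systemd/myapp.container
# Hypothetical example; image and port are placeholders.
[Unit]
Description=Example rootless app via Podman quadlet

[Container]
Image=docker.io/library/nginx:stable
PublishPort=8080:80
AutoUpdate=registry

[Service]
Restart=on-failure

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, the generated `myapp.service` can be started, stopped, and enabled like any other systemd unit, which is the management benefit mentioned above.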

[–] Jerry@feddit.online 2 points 3 months ago (2 children)

Depends on the application for me. For Mastodon, I want to allow 12K-character posts, more than 4 poll choices, and custom themes. Can't do that with Docker containers. For Peertube and Mobilizon, I use Docker containers.

[–] 51dusty@lemmy.world 2 points 3 months ago (2 children)

My two bare-metal servers are the file server and the music server. I have other services in a Pi cluster.

The file server, because I can't think of why I would need to use a container.

The music software is proprietary and requires additional complications to get it to work properly, or at all, in a container. It also does not like sharing resources and is CPU-heavy when playing to multiple sources.

If either of these machines dies, a temporary replacement can be sourced very easily (e.g. from the back of my server closet) and recreated from backups while I purchase new hardware or fix/rebuild the broken one.

IMO the only reliable setup for containers is a cluster, because if you're running several containers on one device and it fails, you've lost several services at once.

[–] eleitl@lemmy.zip 2 points 2 months ago

Obviously, you host your own hypervisor on your own or rented bare metal.

[–] frezik@lemmy.blahaj.zone 2 points 2 months ago (1 children)

My file server is also the container/VM host. It does NAS duties while containers/VMs do the other services.

OPNsense is its own box because I prefer to separate it for security reasons.

Pihole is on its own RPi because that was easier to set up. I might move that functionality to the AdGuard plugin on OPNsense.

[–] SailorFuzz@lemmy.world 2 points 2 months ago (4 children)

Mainly that I don't understand how to use containers... or VMs, that well. I have an old MyCloud NAS and a little puck PC that I wanted to run simple QoL services on... Home Assistant, Jellyfin, etc.

I got Proxmox installed on it, and I can access it... but I don't know what the fuck I'm doing. There was a website that let you just run shell scripts to install a lot of things... but now none of those work because it says my version of Proxmox is wrong (when it's not?)... so those don't work.

And at least VMs are easy(ish) to understand. Fake computer with an OS... easy. I've built PCs before, I get it. Containers just never want to work, or I don't understand wtf to do to make them work.

I wanted to run Zulip or Rocket.Chat for internal messaging around the house (wife and I both work at home, kid does home/virtual school)... wanted to use a container because a service that simple doesn't feel like it needs a whole VM... but it won't work.

[–] pineapplelover@lemmy.dbzer0.com 2 points 2 months ago

All I have is Minecraft and a Discord bot, so I don't think it justifies VMs.

[–] bhamlin@lemmy.world 2 points 2 months ago

It depends on the service and the desired level of the stack.

I generally run services directly on something like a Raspberry Pi, because VMs and containers add complexity that isn't really justified for the task.

At work, I run services in docker in VMs because the benefits far outweigh the complexity.

[–] jet@hackertalks.com 1 points 2 months ago

KISS

The more complicated the machine, the more chances for failure.

Remote management plus bare metal just works: it's very simple, and you get the maximum out of the hardware.

Depending on your use case, that could be very important.

[–] misterbngo@awful.systems 1 points 2 months ago (2 children)

Your phrasing of the question implies a poor understanding. There's nothing preventing you from running containers on bare metal.

My colo setup is a mix of classic services and Podman systemd units running on bare metal, combined with a little nginx for the domain and TLS termination.

I think you're actually asking why folks would use bare metal instead of cloud, and here's the truth: you're paying for resiliency even if you don't need it, which makes renting cloud infrastructure incredibly expensive. Most people can probably get away with a $10 VPS, but the AWS meme of needing five app servers, an RDS instance, and a load balancer to run WordPress has rotted people's expectations. The server I paid a few grand for on eBay would cost me about that much monthly to rent from AWS. I've stuffed it full of flash with enough redundancy to lose half of it before going into the colo for a replacement. I paid a bit upfront, but I'm set on capacity for another half decade plus, and my costs are otherwise fixed.
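The nginx side of a setup like this can be tiny. A sketch of TLS termination in front of a locally published container port (domain, cert paths, and upstream port are all placeholders, not the commenter's actual config):

```nginx
server {
    listen 443 ssl;
    server_name example.org;  # placeholder domain

    # Placeholder cert paths (e.g. from certbot)
    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        # Forward to a container published on localhost
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

One server block per domain, each proxying to a different local port, covers most single-box colo setups.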
