I'm running a TrueNAS server on bare metal with a handful of hard drives. I've virtualized it in the past, but meh. I'm also using TrueNAS's built-in features to host a Jellyfin server and a couple of other easy-to-deploy containers.
So TrueNAS itself is running your containers?
Yeah, the more recent versions basically have a form of Docker as part of their setup.
I believe it's now running on Debian instead of FreeBSD, which probably simplified the container setup.
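For anyone curious what that looks like in practice: recent TrueNAS SCALE releases let you paste a compose file as a custom app. A minimal sketch for Jellyfin might look like the following, where the pool paths and port are assumptions you'd swap for your own:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"                          # web UI
    volumes:
      - /mnt/tank/apps/jellyfin/config:/config
      - /mnt/tank/apps/jellyfin/cache:/cache
      - /mnt/tank/media:/media:ro            # media library, read-only
```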
I'm self-hosting Forgejo and I don't really see the benefit of migrating to a container. I can easily install and update it via the package manager, so what benefit does containerization give?
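For context, the containerized route being asked about is roughly a compose file like this hedged sketch (the tag, ports, and paths are examples, not a recommendation); the usual selling points are pinning the version explicitly and keeping all state under one bind mount:

```yaml
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:11   # pin a release line; upgrade by bumping the tag
    restart: unless-stopped
    ports:
      - "3000:3000"    # web UI
      - "2222:22"      # SSH for git over SSH
    volumes:
      - ./forgejo:/data   # repos, config, and the default SQLite DB all live here
```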
I'm doing this on a couple of machines, only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS, and some really small other stuff. Not using VMs or LXC due to low-end hardware (a Pi and an older tiny PC). Not using containers due to lack of experience with them and a little discomfort with the central daemon model of Docker, running containers built by people I don't know.
The migration path I'm working on for myself is switching to Podman quadlets for rootless operation, more isolation between containers, and the benefit of management and updates via systemd. So far my testing for that migration has been slow due to other projects; I'll probably get it rolling on Debian 13 soon.
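For reference, a quadlet is just a systemd-style unit file that Podman turns into a service. A minimal rootless sketch (image, port, and volume paths are placeholders; assumes Podman 4.4+, which Debian 13 ships):

```ini
# ~/.config/containers/systemd/jellyfin.container
[Unit]
Description=Jellyfin media server (rootless quadlet)

[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
Volume=%h/jellyfin/config:/config
Volume=%h/media:/media:ro
AutoUpdate=registry        # lets `podman auto-update` refresh the image

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it starts, stops, and logs like any other unit: `systemctl --user start jellyfin`.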
Depends on the application for me. For Mastodon, I want to allow 12K-character posts, more than 4 poll choices, and custom themes; can't do that with the Docker containers. For PeerTube and Mobilizon, I use Docker containers.
My two bare-metal servers are the file server and the music server. I have other services in a Pi cluster.
The file server is bare metal because I can't think of why I would need to use a container.
The music software is proprietary and requires additional complications to get it to work properly, or at all, in a container. It also does not like sharing resources and is CPU-heavy when playing to multiple sources.
If either of these machines dies, a temporary replacement can be sourced very easily (e.g. from the back of my server closet) and recreated from backups while I purchase new or fix/rebuild the broken one.
IMO the only reliable method for containers is a cluster, because if you're running several containers on one device and it fails, you've lost several services.
Obviously, you host your own hypervisor on your own or rented bare metal.
My file server is also the container/VM host. It does NAS duties while containers/VMs do the other services.
OPNsense is its own box because I prefer to separate it for security reasons.
Pi-hole is on its own RPi because that was easier to set up. I might move that functionality to the AdGuard plugin on OPNsense.
Mainly that I don't understand how to use containers... or VMs that well. I have an old MyCloud NAS and a little pucky PC that I wanted to run simple QoL services on... Home Assistant, Jellyfin, etc.
I got Proxmox installed on it, I can access it... I don't know what the fuck I'm doing. There was a website that let you just run shell scripts to install a lot of things, but now none of those work because it says my version of Proxmox is wrong (when it's not?).
And at least VMs are easy(ish) to understand. Fake computer with an OS... easy. I've built PCs before, I get it. Containers just never want to work, or I don't understand wtf to do to make them work.
I wanted to run Zulip or Rocket.Chat for internal messaging around the house (wife and I both work at home, kid does home/virtual school). I wanted to use a container because a service that simple doesn't feel like it needs a whole VM... but it won't work.
I would give docker compose a try instead. I found Proxmox to be too much, when a simple yaml file (that can be checked into a repo) can do the job.
Pay attention when people say things can be improved (secrets/passwords, rootless/Podman, backups, etc.) and come back to them later.
Just don't expose things to the internet until you understand the risks, don't check secrets into a public git repo, and go from there. It's a lot more manageable and feels like a hobby, versus feeling like I'm still at work trying to get high availability, concurrency, and all this other stuff that doesn't matter for a home setup.
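To make "a simple yaml file" concrete, here's a hedged single-service sketch (Home Assistant picked only because it was mentioned above; the image, paths, and .env convention are assumptions to adapt):

```yaml
# docker-compose.yml — safe to commit to a repo
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    restart: unless-stopped
    network_mode: host                  # simplest option for device discovery on the LAN
    volumes:
      - ./config:/config                # bind mount keeps all state next to the compose file
      - /etc/localtime:/etc/localtime:ro
    env_file: .env                      # secrets live here; add .env to .gitignore
```

`docker compose up -d` starts it; `docker compose pull && docker compose up -d` updates it.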
Proxmox and Docker serve different purposes; they aren't mutually exclusive. I have 4 separate VMs in my Proxmox cluster dedicated specifically to Docker, all running Dockge, so the stacks can all be managed from one interface.
I get that, but the services listed by the other commenter run just fine in Docker with less hassle by throwing in some bind mounts.
The 4 VMs with dedicated Dockge instances are exactly the kind of thing I had in mind for people who want to avoid something that sounds more like work than a hobby when starting out. Building the knowledge takes time, and each product introduced reduces the likelihood of it being completed anytime soon.
Fair point. I'm 12 years into my own self-hosting journey, I guess it's easy to forget that haha.
When I started dicking around with Docker, I initially used Portainer for a while, but that just had way too much going on and the licensing was confusing. Dockge is way easier to deal with, and stupid simple to set up.
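For anyone curious, Dockge itself is deployed with a compose file roughly like the upstream example below (the stacks directory is your choice):

```yaml
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - "5001:5001"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Dockge manage the host's Docker
      - ./data:/app/data
      - /opt/stacks:/opt/stacks                     # where your compose stacks live
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
```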
All I have is Minecraft and a Discord bot, so I don't think that justifies VMs.
It depends on the service and how much of a stack it needs.
I generally run services directly on things like a Raspberry Pi because VMs and containers add complexity that isn't really warranted for the task.
At work, I run services in Docker in VMs because the benefits far outweigh the complexity.
Pure bare metal is crazy to me. I run Proxmox and mount my storage there, and from there it is shared to the machines that need it. It would be convenient to pass the disks through to TrueNAS for some of the functions it provides, but I don't trust my skills for that. I'd have kept TrueNAS on bare metal, but I need so little horsepower for my services that it would be a waste, and I don't think the trade-offs of having TrueNAS run my virtualisation environment were really worth it.
My router is bare metal. It's much simpler to handle the networking with a single physical device like that. Again, it would be convenient to set up OPNsense in a VM for failover, but it introduces a bunch of complexity I don't want or really need. The router typically goes down only for maintenance, not because it crashed or something, and I don't have redundant power or ISPs either.
To me, Docker is an abstraction layer I don't need. VMs are good enough, and Proxmox does a good job with LXCs so far.
Why would I spin up a VM, and a virtual network within that VM, and then a container, when I can just spin up a VM?
I've not spent time learning Docker or k8s; they seem very much like tools designed for a scale that most companies don't operate at, let alone my home lab.
Your phrasing of the question implies a poor understanding. There's nothing preventing you from running containers on bare metal.
My colo setup is a mix of classical and Podman systemd units running on bare metal, combined with a little nginx for the domain and TLS termination.
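A minimal sketch of what that nginx TLS termination might look like (the domain, cert paths, and upstream port are all hypothetical; the container is assumed to publish only on localhost):

```nginx
server {
    listen 443 ssl;
    server_name media.example.org;

    ssl_certificate     /etc/letsencrypt/live/media.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/media.example.org/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;                       # the service behind the proxy
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```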
I think you're actually asking why folks would use bare metal instead of the cloud, and here's the truth: you're paying for resiliency even if you don't need it, which makes renting cloud infrastructure incredibly expensive. Most people can probably get away with a $10 VPS, but the AWS meme of needing 5 app servers, an RDS instance, and a load balancer to run WordPress has rotted people's brains.
My server that I paid a few grand for on eBay would cost me about as much monthly to rent from AWS. I've stuffed it full of flash with enough redundancy to lose half of it before going into the colo for replacements. I paid a bit upfront, but I'm set on capacity for another half decade plus; my costs are otherwise fixed.