Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
- No low-effort posts. This is subjective and will largely be determined by community member reports.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
Nothing is “docker containerized”. Docker is just a daemon and a set of tools for managing OCI-compliant containers.
No? If you spun up one VM in Proxmox and installed docker and used it to run 10 containers, that would use fewer system resources than running 10 LXC containers directly on Proxmox.
Like… you don’t like that the industry has adopted this efficient, portable, interchangeable, flexible, lightweight, mature technology, because you prefer the heavier, less flexible, less portable, non-OCI-compliant alternative?
Are you saying that running docker in a container setup (which at this point would be two layers deep) uses fewer resources than 10 single-layer-deep containers?
I can agree with the statement that a single VM running docker with 10 containers uses fewer resources than 10 CTs that each have docker installed and run their own containers (but that's not what I do, or what I am asking about).
I currently do use one CT that has docker installed with all my docker images (which I wouldn't do if I had the ability not to, but some apps require docker), but this removes most of the benefits you get from using Proxmox in the first place.
One of the biggest advantages of using the hypervisor in the first place is the ability to isolate and run services as their own containers, without needing to actually enter the machine. (For example, if I'm screwing with a server, I can just snapshot the current setup and then roll back if it isn't good.) Throwing everything into a VM with docker bypasses that while adding overhead to the system: I would need to back up the compose file (or however you are composing it) and the container, and then make my changes. My current system is one click to make my changes, and if it goes bad, one click to revert.
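For anyone unfamiliar, the one-click snapshot/rollback workflow described above maps onto Proxmox's `pct` tool; a minimal sketch (the CT ID `105` and the snapshot name are placeholders):

```shell
# Take a snapshot of a container before messing with it
pct snapshot 105 pre-upgrade --description "before app upgrade"

# ...make changes inside the CT; if things break, roll back in one step
pct rollback 105 pre-upgrade

# List and clean up snapshots when done
pct listsnapshot 105
pct delsnapshot 105 pre-upgrade
```

These commands require a Proxmox host and a CT on snapshot-capable storage (e.g. ZFS or LVM-thin), so treat this as an illustration rather than a copy-paste recipe.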
As for the resource explanation: installing docker into a VM on Proxmox and then running every container in it does waste resources. You have the resources docker requires to function (their website currently lists 4 GB of RAM, though in testing I've seen as little as 1 GB work fine), plus CPU and whatever storage it takes up (about half a gig or so), all inside a VM, which also uses more processing and RAM than CTs do since it no longer shares resources with the host. Compared to 10 CTs that are fine-tuned to their specific apps, you will get better performance running the CTs than a VM running everything, while keeping your ability to snapshot and removing the extra layer and the ephemeral design docker has (that can be a good and a bad thing, but when troubleshooting I lean towards good).
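The resource claims on both sides of this thread are easy to check empirically rather than argue from spec sheets; a rough sketch, assuming docker is installed in the VM and you have shell access to the Proxmox host (CT ID `105` is a placeholder):

```shell
# Inside the docker VM: per-container memory/CPU, plus the daemon's own footprint
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"
systemctl status docker --no-pager | grep -i memory

# On the Proxmox host: what each CT actually consumes
pct list
pct exec 105 -- free -m
```

Comparing these numbers side by side for the same workloads is more reliable than either the 4 GB figure from docker's docs or a remembered delta.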
Edit: clarification and general visibility so it wasn't bunched together.
If those 10 single-layer-deep containers are Proxmox’s LXC containers, then yes, absolutely. OCI containers are isolated processes that run single services, usually just a single binary. There’s no OS, no init system. They’re very lightweight with very little overhead. They’re “containerized services”. LXC containers, on the other hand, are very heavy “system containers” that have a full OS and user space, init system, file systems, etc. They are one step removed from being full-size VMs, short of the fact that they share the host’s kernel and don’t need to virtualize. In short, your single LXC running docker and a bunch of containers inside of it is far more resource-efficient than running a bunch of separate LXC containers.
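To illustrate the "single binary, no init system" point: a minimal OCI image can literally be one process on an empty base, as in this hypothetical Dockerfile (`myservice` is a placeholder for a statically linked binary):

```dockerfile
# Hypothetical statically linked binary packaged on an empty base image:
# no distro userland, no shell, no init system -- just the one process.
FROM scratch
COPY myservice /myservice
ENTRYPOINT ["/myservice"]
```

Most real images do ship a slim distro userland (e.g. alpine or debian-slim) for convenience, but even those run a single foreground process rather than booting an OS the way an LXC system container does.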
I mean that’s exactly what docker containers do but more efficiently.
I mean that’s sort of the entire idea behind docker containers as well. It can even be automated for zero downtime updates and deployments, as well as rollbacks.
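As a sketch of what automated updates and rollbacks look like with plain docker compose (image tags in your `compose.yaml` are assumed to be pinned; orchestrators like Swarm or Kubernetes automate this further):

```shell
# Update: pull newer images and recreate only containers whose image changed
docker compose pull
docker compose up -d

# Optionally wait for healthchecks to pass before considering it done
docker compose up -d --wait

# Rollback: pin the previous image tag in compose.yaml, then re-run
docker compose up -d
```

Because the containers are ephemeral and the state lives in volumes and the compose file, "revert" is just re-deploying the old tag.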
That is incorrect. Let’s break away from containers and VMs for a second and look deeper into what is happening under the hood here.
Option A (Docker + containers): one OS, one init system, one full set of Linux libraries.
Option B (10 LXC containers): ten operating systems, ten separate init systems, ten separate sets of full Linux libraries.
Option A is far more lightweight, and becomes a more attractive option the more services you add.
And not only that, but as you found out, you don’t need to run a full VM for your docker host. You could just use an LXC. Though in that case I’d still prefer the one VM, so that your containers aren’t sharing your Proxmox host’s kernel.
Like, LXCs do have a use case, but it sounds like you’re using them as an alternative to regular service containers, and that’s not really what they’re for.
Your statements are surprising to me, because when I initially set this system up I tested against that because I had figured similar.
My original layout was a full docker environment under a single VM which was only running Debian 12 with docker.
I remember seeing a good 10 GB difference in RAM usage between offloading the machines off the docker instance onto their own CTs and keeping them all as one unit. I guess this could be chalked up to the docker container implementation being bad, or something being wrong with the VM. It was my primary reason for keeping them isolated; it was a win/win because services had better performance and were easier to manage.