this post was submitted on 29 Jan 2026
127 points (92.1% liked)

Selfhosted

Anyone else just sick of trying to follow guides that cover 95% of the process, or that slightly miss a step, and then spending hours troubleshooting the setup just to get it to work?

I think I just have too much going on in my "lab", to the point that when something breaks (and my wife and/or kids complain) it's more of a hassle to try and remember how to fix or troubleshoot stuff. I only document lightly cuz I feel like I can remember well enough. But then it's a struggle to find the time to fix things, or stuff gets tested and 80% completed but never fully used, because life is busy and I don't have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers or VMs, or other services. Some stuff is fine/easy or requires little effort, but others just don't seem worth it.

I miss GUIs, where I could fumble through the settings to fix things; it's easier for me to look through all that than to read a bunch of commands.

Idk, do you get lab burnout? Maybe cuz I do IT for work too, it just feels like it's never-ending...

[–] RickyRigatoni@retrolemmy.com 1 points 6 minutes ago

Trying to get peertube installed just to be able to organize my video library was pain.

[–] zen@lemmy.zip 1 points 23 minutes ago

Yes, I get lab burnout. I do not want to be fiddling with stuff after my day job. You should give yourself a break and do something else after hours, my dude.

BUT

I do not miss GUIs. Containers are a massive win because they are declarative, reproducible, and can be version controlled.

[–] moistracoon@lemmy.zip 1 points 50 minutes ago

While I'm already gaining plentiful information from this comments section, I wanted to add that the IT brain drain is real and you are not alone.

[–] Dylancyclone@programming.dev 3 points 1 hour ago* (last edited 1 hour ago) (1 children)

If you'll let me self-promote for a second, this was part of the inspiration for my Ansible Homelab Orchestration project. After dealing with a lot of those projects that practically force you to read through the code to get a working environment, I wanted a way to reproducibly spin up my entire homelab should I need to move computers or if my computer dies (both of which have happened, and having a setup like this helped tremendously). So far the ansible playbook supports 117 applications, most of which can be enabled with a single configuration line:

immich_enabled: true
nextcloud_enabled: true

And it will orchestrate all the containers, networks, directories, etc for you with reasonable defaults. All of which can be overwritten, for example to enable extra features like hardware acceleration:

immich_hardware_acceleration: "-cuda"

Or to automatically get a letsencrypt cert and expose the application on a subdomain to the outside world:

immich_available_externally: true

It also comes with scripts and tests to help add your own applications and ensure they work properly

I also spent a lot of time writing the documentation so no one else had to suffer through some of the more complicated applications haha (link)

Edit: I am personally running 74 containers through this setup, complete with backups, automatic ssl cert renewal, and monitoring
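
For readers unfamiliar with Ansible, options like the ones above typically live together in a variables file. Here is a minimal, hypothetical sketch combining the toggles from this comment; the three variable names are taken from the comment itself, but the file path and everything else is an illustrative assumption, not the project's documented layout:

# host_vars/homelab.yml (hypothetical example only)
immich_enabled: true
immich_hardware_acceleration: "-cuda"   # opt into the hardware-accelerated image variant
immich_available_externally: true       # request a Let's Encrypt cert and external subdomain
nextcloud_enabled: true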

[–] meltedcheese@c.im 3 points 1 hour ago

@Dylancyclone @selfhosted This looks very useful. I will study your docs and see if it’s right for me. Thanks for sharing!

[–] oeuf@slrpnk.net 1 points 1 hour ago

Check out the YUNOhost repos. If everything you need is there (or equivalents thereof), you could start using that. After running the installation script you can do everything graphically via a web UI. Mine runs for months at a time with no intervention whatsoever. To be on the safe side I make a backup before I update or make any changes, and if there is a problem just restore with a couple of clicks via my hosting control panel.

I got into it because it's designed for noobs, but I think it would be great for anyone who just wants to relax. Highly recommend.

[–] BrightCandle@lemmy.world 5 points 2 hours ago (1 children)

I reject a lot of apps that require a docker compose containing a database and caching infrastructure etc. All I need is the process, and they ought to use SQLite by default because my needs are not going to exceed its capabilities. A lot of these self-hosted apps are overbuilt and come with no defaults or poor ones, causing a lot of extra work to deploy them.
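
As a rough illustration of the "single process, SQLite by default" shape being described here, a minimal compose file can be this small. The image name and paths are placeholders for a hypothetical app, not a real project:

# docker-compose.yml for a hypothetical single-process app
services:
  app:
    image: example/simple-app:latest      # placeholder image
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data                      # SQLite file lives here, easy to back up
    environment:
      - DATABASE_URL=sqlite:///data/app.db
    restart: unless-stopped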

[–] qaz@lemmy.world 1 points 29 minutes ago

Some apps really go overboard. I tried out a bookmark collection app called Linkwarden some time ago and it needed 3 docker containers and 800 MB of RAM.

[–] brucethemoose@lemmy.world 1 points 2 hours ago* (last edited 2 hours ago) (2 children)

I find the overhead of docker crazy, especially for simpler apps. Like, do I really need 150GB of hard drive space, an extensive poorly documented config, and a whole nested computer running just because some project refuses to fix their dependency hell?

Yet it’s so common. It does feel like usability has gone on the back burner, at least in some sectors of software. And it’s such a relief when I read that some project consolidated dependencies down to C++ or Rust, and it will just run and give me feedback without shipping a whole subcomputer.

[–] zen@lemmy.zip 2 points 28 minutes ago* (last edited 26 minutes ago)

Docker in and of itself is not the problem here, from my understanding. You can and should trim the container down.

Also it's not a "whole nested computer", like a virtual machine. It's only everything above the kernel, because it shares its kernel with the host. This makes them pretty lightweight.

It's sometimes even useful to run Rust or C++ code in a Docker container, for portability, provided of course you do it right. For Rust, it typically requires a multi-stage build to bring the container size down.

Basically, the people making these Docker containers suck donkey balls.

Containers are great. They're a huge win in terms of portability, reproducibility, and security.

[–] unit327@lemmy.zip 3 points 2 hours ago

As someone used to the bad old days, gimme containers. Yes it kinda sucks, but it sucks less than the alternative. Can you imagine trying to get multiple versions of postgres working for different applications you want to host on the same server? I also love being able to just use the host OS stock packages without needing to constantly compile and install custom things to make x or y work.

[–] friend_of_satan@lemmy.world 14 points 6 hours ago (1 children)

You should take notes about how you set up each app. I have a directory for each self hosted app, and I include a README.md that includes stuff like links to repos and tutorials, lists of nuances of the setup, itemized lists of things that I'd like to do with it in the future, and any shortcomings it has for my purposes. Of course I also include build scripts so I can just "make bounce" and the software starts up without me having to remember all the app-specific commands and configs.

If a tutorial gets you 95% of the way, and you manage to get the other 5% on your own, write down that info. Future you will be thankful. If not, write a section called "up next" that details where you're running into challenges and need to make improvements.

[–] clif@lemmy.world 2 points 3 hours ago (1 children)

I started a blog specifically to make me document these things in a digestible manner. I doubt anyone will ever see it, but it's for me. It's a historical record of my projects and the steps and problems experienced when setting them up.

I'm using 11ty so I can just write markdown notes and publish static HTML using a very simple 11ty template. That takes all the hassle out of wrangling a website and all I have to do is markdown.

If someone stumbles across it in the slop ridden searchscape, I hope it helps them, but I know it will help me and that's the goal.

[–] moonshadow@slrpnk.net 2 points 2 hours ago

Would love to see the blog

[–] fozid@feddit.uk 19 points 9 hours ago* (last edited 9 hours ago) (2 children)

🤮 I hate gui config! Way too much hassle. Give me cli and a config file any day! I love being able to just ssh into my server anytime from anywhere and fix, modify or install and set up something.

The key to not being overwhelmed is manageable deployment. Only set up one service at a time; get it working, safe and reliable before switching to actually using it full time, then once you're certain it's solid, implement the next tool or deployment.

My servers have almost no breakages or issues. They run 24/7/365 and are solid and reliable. The only time anything breaks is either an update or a new service deployment, but those are just user error by me and not the server's fault.

Although I don't work in IT, so maybe the small bits of maintenance I actually do feel like less to me?

I have 26 containers running, plus a fair few bare metal services. Plus I do a bit of software dev as a hobby.

[–] jjlinux@lemmy.zip 3 points 7 hours ago

Story of my life (minus the dev part). I self host everything out of a Proxmox server and CasaOS for sandboxing and trying new FOSS stuff out. Unless the internet goes out, everything is up 24/7 and rarely do I need to go in there and fix something.

[–] towerful@programming.dev 1 points 5 hours ago

I love cli and config files, so I can write some scripts to automate it all.
It documents itself.
Whenever I have to do GUI stuff I always forget a step or do things out of order or something.

[–] corsicanguppy@lemmy.ca 12 points 11 hours ago (1 children)

You're not alone.

The industry itself has become pointlessly layered like some origami hell. As a former OS security guy I can say it's not in a good state with all the supply-chain risks.

At the same time, many 'help' articles are karma-farming 'splogs' of such low quality and/or just slop that they're not really useful. When something's missing, it feels to our imposter syndrome like it's a skills issue.

Simplify your life. Ditch and avoid anything with containers or bizarre architectures that feels too intricate. Decide what you need and run those things on really reliable options. Auto patching is your friend (but choose a distro and package format where it's atomic and rolls back easily).

You don't need to come home only to work. This is supposed to be FUN for some of us. Don't chase the Joneses, but just do what you want.

Once you've simplified, get in the habit of going outside. You'll feel a lot better about it.

[–] mrnobody@reddthat.com 3 points 7 hours ago

That's true. I've set up a lot of stuff as tests that I thought would be useful services, but they never really got used by me, so I didn't maintain them.

I didn't take the time to really dive in and learn Docker outside of a few guides, which is probably why it's a struggle...

[–] HamsterRage@lemmy.ca 1 points 7 hours ago

As an example, I was setting up SnapCast on a Debian LXC. It is supposed to stream whatever goes into a named pipe in the /tmp directory. However, recent versions of Debian do NOT allow other processes to write to named pipes in /tmp.

It took just a little searching to find this out after quite a bit of fussing about changing permissions and sudoing to try to funnel random noise into this named pipe. After that, a bit of time to find the config files and change it to someplace that would work.

Setting up the RPi clients with a PirateAudio DAC and SnapCast client also took some fiddling. Once I had it figured out on the first one, I could use the history stack to follow the same steps on the second and third clients. None of this stuff was documented anywhere, even though I would think that a top use of an RPi Zero with that DAC would be for SnapCast.

The point is that it seems like every single service has these little undocumented quirks that you just have to figure out for yourself. I have 35 years of experience as an "IT Guy", although mostly as a programmer. But I remember working HP-UX 9.0 systems, so I've been doing this for a while.

I really don't know how people without a similar level of experience can even begin to cope.

[–] atzanteol@sh.itjust.works 20 points 14 hours ago (2 children)

Sounds like you haven't taken the time to properly design your environment.

Lots of home gamers just throw stuff together and "hack things till they work".

You need to step back and organize your shit. Develop a pattern, automate things, use source control, etc. Don't just blindly follow the weirdly-opinionated setup instructions. Make it fit your standard.

[–] mrnobody@reddthat.com 1 points 7 hours ago (1 children)

This. I definitely need to take the time to organize. A few months ago I set up a new 4U Rosewill case with 24 hot-swap bays. Expanded my storage quite a bit, but I need to finish moving some services too. I went from a big outdated SMC server to reusing an old gaming mobo, since it's an i7 at 95 W vs 2× 125 W lol.

It took a week just to move all my Plex data cuz that Supermicro was only 1GbE.

[–] non_burglar@lemmy.world 2 points 6 hours ago (1 children)

only 1GbE

What needs more than 1gbe? Are you streaming 8k?

Sounds like you are your own worst enemy. Take a step back and think about how many of these projects are worth completing and which are just for fun and draw a line.

And automate. There are tools to help with this.

[–] WhyJiffie@sh.itjust.works 1 points 4 hours ago

What needs more than 1gbe? Are you streaming 8k?

I think they meant that it was a bottleneck while moving to the new hardware.

[–] EncryptKeeper@lemmy.world 43 points 16 hours ago* (last edited 16 hours ago) (2 children)

If a project doesn’t make it dead simple to manage via docker compose and environment variables, just don’t use it.

I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.
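
For anyone who hasn't seen the pattern, the "docker compose plus environment variables" workflow usually boils down to something like the sketch below. The service name, image, and variables are illustrative assumptions, not this commenter's actual stack:

# docker-compose.yml for one hypothetical app, driven by a .env file next to it
services:
  web:
    image: example/notes:latest
    env_file: .env                 # PUID, PGID, TZ, APP_URL, etc. (placeholders)
    volumes:
      - ./config:/config
    ports:
      - "127.0.0.1:3000:3000"      # bound to localhost; a reverse proxy sits in front
    restart: unless-stopped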

Sometimes you see a program and it starts with “Clone this repo”, and it has a docker compose file, six env files, some extra config files, and consists of a front-end container, back-end container, database container, message-queueing container, etc… just close that web page and don’t bother with that project lol.

That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui” I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole

[–] theparadox@lemmy.world 7 points 7 hours ago* (last edited 7 hours ago) (2 children)

That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui” I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole.

Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.

I work in IT and like most we're also a Windows shop. I have zero professional experience with Linux but I'm learning through my home lab while simultaneously trying extract myself from the privacy cluster fuck that is the current consumer tech industry. It's a transition and the documentation I find more or less matches the OPs experience.

I research, pick what seems to be the best for my situation (often most popular), get it working with sustainable, minimal complexity, and in short time find that some small, vital aspect of its setup (like reverse proxy) has literally zero documentation for getting it to work with some other vital part of my setup. I guess I should have made a better choice 18 months ago when I didn't expect to find this new service accessible. I find some two year old Github issue comment that allegedly solves my exact problem that I can't translate to the version I'm running because it's two revisions newer. Most other responses are incomplete, RTFM, or "git gud n00b", like your response here

Wherever you work, whatever industry, you can get burnt out. It's got nothing to do with if you've "got what it takes" or whatever bullshit you think "you’re in the wrong field of work and you’re trying to jam a square peg in a round hole" equates to.

I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.

If it's that easy, then point me to where you've written about it. I'd love to learn what 100 services you've cloned the repos for, tweaked a few files in a few minutes, and run with minimal maintenance all working together harmoniously.

[–] WhyJiffie@sh.itjust.works 2 points 3 hours ago

Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.

why? it wasn't telling them they should quit self hosting. it wasn't condescending either, I think. it was about work.

but truth be told IT is a very wide field, and maybe that generalization is actually not good. still, 15 containers is not much, and as I see it they help with not letting all your hosted software make a total mess on your system.

working with the terminal sometimes feels like working with long tools in a narrow space, not being able to fully use my hands, but UX design is hard, and so making useful GUIs is hard and also takes much more time than making a well organized CLI tool.
in my experience the most important here is to get used to common operations in a terminal text editor, and find an organized directory structure for your services that work for you. Also, using man pages and --help outputs. But when you can afford doing it, you could scp files or complete directories to your desktop for editing with a proper text editor.

[–] EncryptKeeper@lemmy.world 3 points 5 hours ago* (last edited 4 hours ago)

You’ve completely misread everything I’ve said.

Let’s make a few things clear here.

My response is not “Git gud”. My response is that sometimes there are selfhosted projects that are really cool and many people recommend, but the set up for them is genuinely more complex than it should be, and you’re better off avoiding them instead of banging your head against a wall and stressing yourself out. Selfhosting should work for you, not against you. You can always take another crack at a project later when you’ve got more hands on experience.

Secondly, it’s not a matter of whether OP “has what it takes” in his career. I simply pointed out that the things he seems to hate about selfhosting are fundamental, core principles of working in IT. My response to him isn’t that he can’t hack it; it seems more like he just genuinely doesn’t like it. I’m suggesting that it won’t get better because this is what IT is. What that means to OP is up to him. Maybe he doesn’t care because the money is good, which is valid. But maybe he considers eventually moving into a career he doesn’t hate, and then the selfhosting stuff won’t bother him so much. As a matter of fact, OP himself didn’t take offense to that suggestion the way you did. He agreed with my assessment.

As you learn more about self hosting, you’ll find that certain things like reverse proxy set up isn’t always included in the documentation because it’s not really a part of the project. How reverse proxies (And by extension http as a whole) work is a technology to learn on its own. I rarely have to read documentation on RP for a project because I just know how reverse proxying works. It’s not really the responsibility of a given project to tell you how to do it, unless their project has a unique gotcha involved. I do however love when they do include it, as I think that selfhosting should be more accessible to people who don’t work in IT.

If it's that easy, then point me to where you've written about it. I'd love to learn what 100 services you've cloned the repos for, tweaked a few files in a few minutes, and run with minimal maintenance all working together harmoniously.

Most of them TBH. I often don’t engage with a project that involves me cloning a repo because I know it means it’s going to be a finicky pain in the ass. But most things I set up were done in less than 20 minutes, including secure access from the internet using a VPS proxy with a WAF and CrowdSec, and integration with my SSO. If you want to share with me your common pain points, or want an example of what my workflow looks like let me know.

[–] mrnobody@reddthat.com 18 points 15 hours ago (1 children)

I agree with that 3rd paragraph lol. That's probably some of my issue at times. As far as IT goes, doesn't it get overwhelming when you've had a 9-hour workday, just to hear someone at home complain that this other thing you run doesn't work and now you have to troubleshoot that too?

Without going into too much detail, I'm a solo operation guy for about 200 end users. We're a Win11 and Office shop like most, and I've upgraded pretty much every system since my time starting. I've utilized some self-host options too, to help in the day to day which is nice as it offloads some work.

It's just, especially after a long day, playing IT at home can be a bit much. I don't normally mind, but I think I just know the Windows stuff well enough through and through, so taking on new Docker or self-host tools is apples and oranges sometimes. Maybe I'm getting spoiled with all the turnkey stuff at work, too.

[–] EncryptKeeper@lemmy.world 2 points 4 hours ago* (last edited 4 hours ago)

I’m an infrastructure guy, I manage a few datacenters that host some backends for ~100,000 IoT devices and some web apps that serve a few million requests a day each. It sounds like a lot, but the only real difference between my work and yours is that at the scale I’m working with, things have to be built in a way that they run uninterrupted with as little interaction from me as possible. You see fewer GUIs, and things stop being super quick and easy to initially get up and running, but the extra effort spent architecting things right rewards you with a much lighter troubleshooting and firefighting workload.

You sorta stop being a mechanic that maintenances and fixes problem cars, and start being an engineer that builds cars to have as few problems as possible. You lose the luxury of being able to fumble around under a car and visually find an oil filter to change, and start having to make decisions on where to put the oil filter from scratch, but to me it is far more rewarding and satisfying. And ultimately the way that self hosting works these days, it has embraced the latter over the former. It’s just a different mindset from the legacy click-ops sysadmin days of IT.

What this looks like to me in your example is, when I have users of my selfhosted stuff complain about something not working, I’m not envisioning yet another car rolling into the shop for me to fix. I envision a puzzle that must be solved. Something that needs optimization or rearchitecting that will make the problem that user had go away, or at the very least fix itself, or alert me so I can fix it before the user complains.

This paradigm I work under is more work, but the work is rewarding and it’s “fun” when I identify a problem that needs solving and solve it. If that isn’t “fun” to you, then all you’re left with is the “bunch more work” part.

So ultimately what you need to figure out is what your goal is. If you’re not interested in this new paradigm and you just want turnkey solutions, there are ways of self-hosting that are more suited to that mindset. You get less flexibility, but there’s less work involved. And to be clear, there’s absolutely nothing wrong with that. At the end of the day you have to do what works for you.

My recommendations to you assuming you just want to self hosted with as little work and maintenance as possible:

  • Stick with projects that are simple to set up and are low maintenance. If a project seems like a ton of work to get going, just don’t use it. Take the time to shop around for something simpler. Even I do this a lot.
  • Try some more turnkey self-hosting solutions. Anything with an App Store for applications: UnRAID, CasaOS, things of that nature that either have one-click deploy apps, or at least have pre-filled templates where all you need to do is provide a couple of variable values. You won’t learn as much career-wise this way, but it’ll take a huge mental load off.
  • When it comes to tools your family is likely to depend on and thus complain about, instead of selfhosting those things perhaps look for a non-big tech alternative. For example, self hosting email can be a lot of work. But you don’t have to use Gmail either. Move your family to ProtonMail or Tutanota, or other similar privacy friendly alternatives. Leave your self hosting for less critical apps that nobody will really care if it goes down and you can fix at your leisure.
[–] Strider@lemmy.world 4 points 11 hours ago

It's a mess. I'm even moving to a different field in IT due to this.

[–] Decronym@lemmy.decronym.xyz 11 points 14 hours ago* (last edited 18 minutes ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Git: Popular version control system, primarily for code
IoT: Internet of Things for device controllers
LAMP: Linux-Apache-MySQL-PHP stack for webhosting
LXC: Linux Containers
Plex: Brand of media server package
RPi: Raspberry Pi brand of SBC
SBC: Single-Board Computer
SSO: Single Sign-On
VPS: Virtual Private Server (opposed to shared hosting)

[Thread #40 for this comm, first seen 29th Jan 2026, 05:20] [FAQ] [Full list] [Contact] [Source code]

[–] chrash0@lemmy.world 18 points 16 hours ago (3 children)

honestly, i 100% do not miss GUIs that hopefully do what you want them to do or have options grayed out or don’t include all the available options etc etc

i do get burnout, and i suffer many of the same symptoms. but i have a solution that works for me: NixOS

ok it does sound like i gave you more homework, but hear me out:

  • with NixOS and flakes you have a commit history for your lab services, all centralized in one place.
  • this can include as much documentation as you want: inline comments, commit messages, living documents in your repository, whatever
  • even services that only provide a Docker based solution can be encapsulated and run by Nix, including using an alternate runtime like podman or containerd
  • (this one will hammer me with downvotes but i genuinely do think that:) you can use an LLM agent like GitHub Copilot to get you started, learn the Nix language and ecosystem, and create Nix modules for things that need to be wrapped. i’ve been a software engineer for 15 years; i’ve got nothing to prove when it comes to making a working system. what i want is a working system.
[–] krashmo@lemmy.world 21 points 17 hours ago (4 children)

Use portainer for managing docker containers. I prefer a GUI as well and portainer makes the whole process much more comfortable for me.

[–] WhyJiffie@sh.itjust.works 1 points 3 hours ago

just know that sometimes their buggy frontend loads the analytics code even if you have opted out; there's an ages-old issue about this on their GitHub repo, closed because they don't care.

It's Matomo analytics, so not as bad as some big tech, but still.

[–] Pika@sh.itjust.works 18 points 17 hours ago* (last edited 17 hours ago) (5 children)

I'm sick of everything moving to a docker image myself. I understand on a standard setup the isolation is nice, but I use Proxmox and would love to be able to actually use its isolation capabilities. The environment is already suited for the program. Just give me a standard installer for the love of tech.

[–] WhyJiffie@sh.itjust.works 1 points 3 hours ago (1 children)

unless you have a zillion gigabytes of RAM, you really don't want to spin up a VM for each thing you host. the separate OSes have a huge memory overhead, with all the running services, cache memory, etc. the memory usage of most services can vary a lot, so if you could just assign 200 MB of RAM to each VM that would be moderate, but you can't, because when it needs more RAM than that it will crash, possibly leaving operations half-finished and leading to corruption. and assigning 2 GB of RAM to every VM is a waste.

I use proxmox too, but I only have a few VMs, mostly based on how critical a service is.

[–] Pika@sh.itjust.works 1 points 2 hours ago

For VMs, I fully agree with you, but the best part about Proxmox is the ability to use containers, or CTs, which share system resources. So unlike a VM, if you specify a container has two gigs of RAM, that just means that it has two gigs of RAM that it can use, unlike the VM where it's going to use that amount (and will crash if it can't get that amount)

These CTs do the equivalent of what Docker does, which is share the system space with other services, with isolation, while giving you an easy-to-administer and easy-to-back-up system and keeping things separated by service.

For example, with a Proxmox CT I can take snapshots of the container itself before I do any type of work, whereas if I was using Docker on a primary machine I would need to back up the Docker container completely. Additionally, having them as CTs means I can work straight on the container itself instead of having to edit a Docker file, which by design is meant to be ephemeral. If I had to choose between troubleshooting bare bones versus troubleshooting a Docker container, I'm going to choose bare bones every step of the way. (You can even run an Alpine CT if you would rather keep the average Docker container setup.)

Also, about the overcommitting thing, be aware that the issue you've described will happen with a Docker setup as well. Docker doesn't care about the amount of RAM the system is allotted, and when you over-allocate the system, RAM-wise, it will start killing containers, potentially leaving them in the same state.

Anyway, long story short, Docker containers do basically the same thing that a Proxmox CT does; they're just ephemeral instead of persistent, and designed to be plug-and-go, which I've found isn't super handy in a Proxmox-style setup, because a lot of the time I'd want to share resources such as a dedicated database or caching system, which is generally a pain in the butt to try to implement on Docker setups.
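
For what it's worth, sharing a dedicated database between several compose projects is possible via a pre-created external network, though it does take manual wiring, which may be exactly the pain being described. A rough sketch, with all names and credentials made up:

# Shared stack: run `docker network create shared-db` once, then every app
# stack that joins the "shared-db" network can reach this instance as "postgres".
services:
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=changeme   # placeholder credential
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    networks:
      - shared-db

networks:
  shared-db:
    external: true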

[–] exu@feditown.com 4 points 13 hours ago (1 children)

You can still use VMs and do containers in there. That's what I do, makes separating different services very easy.

[–] Pika@sh.itjust.works 1 points 2 hours ago

This is what I currently do with non-specialized services that require Docker. I have one CT which runs Docker Engine, and I throw everything on there; if I have a specialized service that needs Docker, I will still run it in its own CT. But then I use Docker Agent, so I can use one administration panel.

It's just annoying because I would rather just remove Docker from the situation because when you're running Proxmox, you're essentially running a virtualized system in a virtualized system because you have Proxmox, which is the bare bones running a virtualized environment for the container, which is then running a virtualized environment for the Docker container.

[–] hesh@quokk.au 24 points 18 hours ago* (last edited 18 hours ago) (2 children)

I wouldn't say I'm sick of it, but it can be a lot of work. It can be frustrating at times, but also rewarding. Sometimes I have to stop working on it for a while when I get stuck.

In any case, I like it a lot better than being Google's bitch.

[–] pHr34kY@lemmy.world 4 points 13 hours ago* (last edited 13 hours ago)

I deliberately have not used docker at home to avoid complications. Almost every program is in a debian/apt repo, and I only install frontends that run on LAMP. I think I only have 2 or 3 apps that require manual maintenance (apart from running "apt upgrade"). NextCloud is 90% of the butthurt.

I'm starting to turn off services on IPv4 to reduce the network maintenance overhead.

[–] pathos@lemmy.ml 7 points 16 hours ago (1 children)

Not trying to start any measuring contest, but what I've learned is that there are always people out there who do things 100x more than I do. So yes, 1500 Docker composes are a thing, and I've witnessed some composes with over 10k lines.
