this post was submitted on 29 Jan 2026
142 points (92.8% liked)

Selfhosted


Anyone else just sick of trying to follow guides that cover 95% of the process, or that slightly miss a step, and then spending hours troubleshooting just to get the setup to work?

I think I just have too much going on in my "lab", to the point that when something breaks (and my wife and/or kids complain) it's more of a hassle to try and remember how to fix or troubleshoot stuff. I only lightly document things cuz I feel like I can remember well enough. But then it's a struggle to find the time to fix them, or stuff gets tested and 80% completed but never fully used, because life is busy and I don't have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers, VMs, and other services. Some stuff is fine/easy or requires little effort, but other stuff just doesn't seem worth it.

I miss GUIs, where I could fumble through settings to fix things; it's easier for me to look through all that than to read a bunch of commands.

Idk, do you get lab burnout? Maybe cuz I do IT for work too it just feels like it's never ending...

[–] falynns@lemmy.world 6 points 2 hours ago (2 children)

My biggest problem is that every docker image thinks it's a unique snowflake — how would anyone else possibly be using such a unique port number as 80?

I know I can change it, believe me, I know I have to change it, but I wish guides would acknowledge that and emphasize choosing a unique port.

[–] unit327@lemmy.zip 3 points 39 minutes ago

Most put it on port 80 with the perfectly valid assumption that the user is sticking a reverse proxy in front of it. The container should expose 80, not forward port 80 to the host.

[–] lilith267@lemmy.blahaj.zone 1 points 23 minutes ago

Containers are meant to be used with docker networks, which makes this a non-issue. Most of the time you want your services on 80/443, since those are the default ports your reverse proxy is going to call.
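A minimal sketch of that pattern (service names and images are hypothetical): every app listens on port 80 inside its container, they share a docker network, and only the reverse proxy publishes host ports.

```yaml
# docker-compose.yml — hypothetical reverse-proxy layout
services:
  proxy:
    image: caddy:2
    ports:
      - "80:80"        # only the proxy binds host ports
      - "443:443"
    networks: [web]
  app1:
    image: example/app1  # hypothetical image; listens on 80 internally
    networks: [web]      # reachable from the proxy as http://app1:80
  app2:
    image: example/app2  # also port 80 internally — no conflict
    networks: [web]
networks:
  web: {}
```

Since nothing except the proxy maps a host port, every image can keep its default port 80 without colliding.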

[–] zen@lemmy.zip 4 points 3 hours ago

Yes, I get lab burnout. I do not want to be fiddling with stuff after my day job. You should give yourself a break and do something else after hours, my dude.

BUT

I do not miss GUIs. Containers are a massive win because they are declarative, reproducible, and can be version controlled.

[–] Dylancyclone@programming.dev 6 points 4 hours ago* (last edited 4 hours ago) (1 children)

If you'll let me self-promote for a second, this was part of the inspiration for my Ansible Homelab Orchestration project. After dealing with a lot of those projects that practically force you to read through the code to get a working environment, I wanted a way to reproducibly spin up my entire homelab should I need to move computers or if my computer dies (both of which have happened, and having a setup like this helped tremendously). So far the ansible playbook supports 117 applications, most of which can be enabled with a single configuration line:

immich_enabled: true
nextcloud_enabled: true

And it will orchestrate all the containers, networks, directories, etc for you with reasonable defaults. All of which can be overwritten, for example to enable extra features like hardware acceleration:

immich_hardware_acceleration: "-cuda"

Or to automatically get a letsencrypt cert and expose the application on a subdomain to the outside world:

immich_available_externally: true

It also comes with scripts and tests to help add your own applications and ensure they work properly

I also spent a lot of time writing the documentation so no one else had to suffer through some of the more complicated applications haha (link)

Edit: I am personally running 74 containers through this setup, complete with backups, automatic ssl cert renewal, and monitoring

[–] meltedcheese@c.im 3 points 4 hours ago

@Dylancyclone @selfhosted This looks very useful. I will study your docs and see if it’s right for me. Thanks for sharing!

[–] BrightCandle@lemmy.world 6 points 5 hours ago (1 children)

I reject a lot of apps that require a docker compose that contains a database and caching infrastructure etc. All I need is the process, and they ought to use SQLite by default because my needs are not going to exceed its capabilities. A lot of these self-hosted apps are overbuilt and come with missing or poor defaults, causing a lot of extra work to deploy them.

[–] qaz@lemmy.world 2 points 3 hours ago

Some apps really go overboard. I tried out a bookmark collection app called Linkwarden some time ago and it needed 3 docker containers and 800MB of RAM.

[–] RickyRigatoni@retrolemmy.com 1 points 3 hours ago

Trying to get PeerTube installed just to be able to organize my video library was a pain.

[–] moistracoon@lemmy.zip 1 points 4 hours ago

While I am gaining plentiful information from this comments section already, wanted to add that the IT brain drain is real and you are not alone.

[–] oeuf@slrpnk.net 1 points 4 hours ago

Check out the YUNOhost repos. If everything you need is there (or equivalents thereof), you could start using that. After running the installation script you can do everything graphically via a web UI. Mine runs for months at a time with no intervention whatsoever. To be on the safe side I make a backup before I update or make any changes, and if there is a problem just restore with a couple of clicks via my hosting control panel.

I got into it because it's designed for noobs, but I think it would be great for anyone who just wants to relax. Highly recommend.

[–] termaxima@slrpnk.net 1 points 3 hours ago

My advice is: just use Nix.

It always works. It does all the steps for you. You will never "forget a step" because either someone has already made a package, or you just make your own that has all the steps, and once that works, it works literally forever.
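For illustration, a minimal NixOS module along those lines (the specific services are just examples, not anything the commenter runs):

```nix
{ config, pkgs, ... }:
{
  # Each service is a single declarative line; `nixos-rebuild switch`
  # performs every setup step, and the same file reproduces the machine.
  services.jellyfin.enable = true;
  services.vaultwarden.enable = true;
}
```

Because the config is the source of truth, "forgetting a step" can't happen — the step is either in the file or it isn't.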

[–] friend_of_satan@lemmy.world 15 points 9 hours ago (1 children)

You should take notes about how you set up each app. I have a directory for each self hosted app, and I include a README.md that includes stuff like links to repos and tutorials, lists of nuances of the setup, itemized lists of things that I'd like to do with it in the future, and any shortcomings it has for my purposes. Of course I also include build scripts so I can just "make bounce" and the software starts up without me having to remember all the app-specific commands and configs.

If a tutorial gets you 95% of the way, and you manage to get the other 5% on your own, write down that info. Future you will be thankful. If not, write a section called "up next" that details where you're running into challenges and need to make improvements.
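As a sketch, a per-app Makefile in that spirit might look like this (targets and commands are assumptions, not the commenter's actual scripts):

```makefile
# Hypothetical Makefile kept next to each app's README.md
up:
	docker compose up -d

down:
	docker compose down

# "make bounce": restart the app without remembering
# any app-specific commands or configs
bounce: down up
```

The point is that the README holds the "why" and the Makefile holds the "how", so future you only needs one command per app.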

[–] clif@lemmy.world 2 points 6 hours ago (1 children)

I started a blog specifically to make myself document these things in a digestible manner. I doubt anyone will ever see it, but it's for me. It's a historical record of my projects and the steps and problems experienced when setting them up.

I'm using 11ty so I can just write markdown notes and publish static HTML using a very simple 11ty template. That takes all the hassle out of wrangling a website and all I have to do is markdown.

If someone stumbles across it in the slop ridden searchscape, I hope it helps them, but I know it will help me and that's the goal.

[–] moonshadow@slrpnk.net 2 points 6 hours ago

Would love to see the blog

[–] brucethemoose@lemmy.world 1 points 5 hours ago* (last edited 5 hours ago) (3 children)

I find the overhead of docker crazy, especially for simpler apps. Like, do I really need 150GB of hard drive space, an extensive, poorly documented config, and a whole nested computer running just because some project refuses to fix their dependency hell?

Yet it’s so common. It does feel like usability has gone on the back burner, at least in some sectors of software. And it’s such a relief when I read that some project consolidated dependencies down to C++ or Rust, and it will just run and give me feedback without shipping a whole subcomputer.

[–] EncryptKeeper@lemmy.world 4 points 3 hours ago

This is a crazy take. Docker doesn't involve much overhead. I'm not sure where your 150GB hard drive space comment comes from, as I run dozens of containers on machines with 30-50GB of hard drive space. There's no nested computer; docker containers are not virtualization. Containers have nothing to do with a single project's "dependency hell" — they're for *your* dependency hell when trying to run a bunch of different services on one machine, or reproducing them quickly and easily across machines.

[–] zen@lemmy.zip 4 points 3 hours ago* (last edited 3 hours ago) (1 children)

Docker in and of itself is not the problem here, from my understanding. You can and should trim the container down.

Also it's not a "whole nested computer", like a virtual machine. It's only everything above the kernel, because it shares its kernel with the host. This makes them pretty lightweight.

It's sometimes even useful to run Rust or C++ code in a Docker container, for portability, provided you of course do it right. For Rust, that typically means a multi-stage build to bring the container size down.

Basically, the people making these Docker containers suck donkey balls.

Containers are great. They're a huge win in terms of portability, reproducibility, and security.
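A minimal sketch of such a multi-stage build (image tags and the `myapp` binary name are hypothetical):

```dockerfile
# Stage 1: build with the full Rust toolchain (large image, discarded)
FROM rust:1.77 AS build
WORKDIR /app
COPY . .
RUN cargo build --release

# Stage 2: ship only the compiled binary on a slim base
FROM debian:bookworm-slim
COPY --from=build /app/target/release/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

The gigabyte-scale toolchain stays in the build stage; the final image is roughly the base image plus one binary.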

[–] brucethemoose@lemmy.world 1 points 3 hours ago* (last edited 3 hours ago) (1 children)

Yeah, I’m not against the idea philosophically. Especially for security. I love the idea of containerized isolation.

But in reality, I can see exactly how much disk space and RAM and CPU and bandwidth they take, heh. Maintainers just can’t help themselves.

[–] NewNewAugustEast@lemmy.zip 1 points 10 seconds ago

Want to mention some? I have no containers using anything like that.

Perhaps you never clean up as you move forward? It's easy to forget to prune them.
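For reference, the usual cleanup commands look like this (standard Docker CLI; the `prune` commands delete data, so run them with care):

```shell
docker system df        # show disk space used by images, containers, volumes
docker image prune -a   # remove images not used by any container
docker builder prune    # clear the build cache
docker volume prune     # remove unused anonymous volumes
```

Old image layers and build cache are the usual culprits when docker's disk usage quietly grows over time.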

[–] unit327@lemmy.zip 4 points 5 hours ago

As someone used to the bad old days, gimme containers. Yes, it kinda sucks, but it sucks less than the alternative. Can you imagine trying to get multiple versions of postgres working for different applications you want to host on the same server? I also love being able to just use the host OS stock packages without needing to constantly compile and install custom things to make x or y work.
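As a sketch of what containers make trivial (service names and the placeholder password are made up), two Postgres major versions running side by side:

```yaml
# docker-compose.yml — two isolated Postgres versions on one host
services:
  db-legacy-app:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example   # placeholder
    volumes:
      - pg12-data:/var/lib/postgresql/data
  db-new-app:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder
    volumes:
      - pg16-data:/var/lib/postgresql/data
volumes:
  pg12-data:
  pg16-data:
```

Each database gets its own binaries and data directory, which on a bare-metal install would mean juggling parallel package versions by hand.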

[–] fozid@feddit.uk 21 points 12 hours ago* (last edited 12 hours ago) (2 children)

🤮 I hate GUI config! Way too much hassle. Give me a CLI and a config file any day! I love being able to just ssh into my server anytime from anywhere and fix, modify, or install and set up something.

The key to not being overwhelmed is manageable deployment. Only set up one service at a time; get it working, safe, and reliable before switching to using it full time, then once you're certain it's solid, implement the next tool or deployment.

My servers have almost no breakages or issues. They run 24/7/365 and are solid and reliable. The only time anything breaks is during an update or a new service deployment, and those are just user error by me, not the servers' fault.

Although I don't work in IT so maybe the small bits of maintenance I actually do feel less to me?

I have 26 containers running, plus a fair few bare metal services. Plus I do a bit of software dev as a hobby.

[–] jjlinux@lemmy.zip 3 points 10 hours ago

Story of my life (minus the dev part). I self host everything out of a Proxmox server and CasaOS for sandboxing and trying new FOSS stuff out. Unless the internet goes out, everything is up 24/7 and rarely do I need to go in there and fix something.

[–] corsicanguppy@lemmy.ca 13 points 14 hours ago (1 children)

You're not alone.

The industry itself has become pointlessly layered like some origami hell. As a former OS security guy I can say it's not in a good state with all the supply-chain risks.

At the same time, many 'help' articles are karma-farming 'splogs' of such low quality, and/or just slop, that they're not really useful. When something's missing, our imposter syndrome tells us it's a skills issue.

Simplify your life. Ditch and avoid anything with containers or bizarre architectures that feels too intricate. Decide what you need and run it on really reliable options. Auto-patching is your friend (but choose a distro and package format where it's atomic and rolls back easily).

You don't need to come home only to work. This is supposed to be FUN for some of us. Don't chase the Joneses, but just do what you want.

Once you've simplified, get in the habit of going outside. You'll feel a lot better about it.

[–] mrnobody@reddthat.com 3 points 10 hours ago

That's true. I've set up a lot of stuff as tests that I thought would be useful services, but they never really got used by me, so I didn't maintain them.

I didn't take the time to really dive in and learn Docker outside of a few guides, which is probably why it's a struggle...
