this post was submitted on 07 Mar 2026
99 points (98.1% liked)

Selfhosted

56957 readers
612 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago
(page 2) 49 comments
[–] Damage@feddit.it 4 points 6 days ago

Following this post I installed paperless. It's amazing.

[–] sorghum@sh.itjust.works 4 points 6 days ago (1 children)

The Nextcloud AIO instance that hadn't been working since September suddenly started working after I updated it. This was all after their forums did fuck all to help except tell me to get gud. I knew the problem wasn't on me or my config, and I feel so vindicated.

[–] bobslaede@feddit.dk 3 points 6 days ago (1 children)

Have you had a look at OpenCloud? Not many add-ons, but it's a simple-ish cloud drive with docs and such. Doesn't use many resources.

[–] sorghum@sh.itjust.works 2 points 6 days ago

I have an instance running, but haven't had a ton of time to dedicate to getting it the way I need it. I need a calendar that is accessible anonymously via the web so people can see my availability. For the file server, CalDAV, and CardDAV I was able to find separate solutions.

[–] Klox@lemmy.world 4 points 6 days ago

I'm redoing everything I have from scratch. This week I have FreeIPA set up from OpenTofu + Ansible configs, which enroll most of my other servers against FreeIPA. I am still migrating TrueNAS to use FreeIPA's Kerberos realm for auth, and I need to chown a lot of files for the new UIDs and GIDs homed in FreeIPA. After that, I'm setting up FreeRADIUS for auth to switches, APs, and WiFi. And then after that, I'm back to overhauling my k8s stack. I have Talos VMs running but didn't finish patching in Cilium. And after that, the real fun begins.

[–] kokomo@lemmy.kokomo.cloud 3 points 5 days ago* (last edited 5 days ago)

Managed to finally get around to self-hosting ntfy, added that to Uptime Kuma for notifications, experimenting with Checkcle, and stood up an Invidious instance for funsies (prob will see how much i use it, but might as well).
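(In case anyone hasn't tried it: ntfy's publish API is just an HTTP POST to the topic URL, and Uptime Kuma sends essentially the same request. The host and topic below are placeholders for your own instance.)

```sh
# Publish a test notification to a self-hosted ntfy instance.
# "ntfy.example.com" and the topic "alerts" are placeholders.
curl \
  -H "Title: Uptime Kuma test" \
  -H "Priority: high" \
  -d "Monitor XYZ is down" \
  https://ntfy.example.com/alerts
```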

[–] fleem@piefed.zeromedia.vip 3 points 6 days ago* (last edited 6 days ago) (1 children)

proxmox backups fixed!

copyparty is really REALLY cool. (i use the phi95 theme)

self hosted gitea was much easier than expected.

jellyfin updated to latest.

fixed habitica issues (gotta have my goddamn checkmarks!)

self hosted ntfy ssh login scripts EVERYWHERE

i said fuck NUT and passed battery backup straight to truenas VM, the graphs are beautiful.

ive decided that an rclone docker setup to serve webdav will be a tool i keep on all lxcs, for moving shit around easier. turn it on, move the stuff, turn it back off. (i can SCP with the best of them but this is so much easier)
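For anyone curious, the whole thing is a one-liner with the official rclone image. Mount path, port, and credentials below are just examples:

```sh
# Throwaway WebDAV share; kill the container when you're done moving files.
docker run --rm -p 8080:8080 -v /srv/stuff:/data \
  rclone/rclone serve webdav /data \
  --addr :8080 --user me --pass changeme
```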

i want a self hosted CA 😭😭😭

[–] shark@lemmy.org 3 points 5 days ago (1 children)

copyparty is really REALLY cool. (i use the phi95 theme)

Wow. That's amazing!

i want a self hosted CA

It's totally worth it. I was putting it off for a very long time, but it was actually kind of easy.

[–] fleem@piefed.zeromedia.vip 1 points 5 days ago (1 children)

got a link? I've been failing to get vaulTLS to even start

[–] shark@lemmy.org 2 points 5 days ago (1 children)

Here’s what I went with: https://github.com/tgangte/LocalCA. I don’t know anything about VaulTLS though.

[–] fleem@piefed.zeromedia.vip 1 points 5 days ago

looks cool! I'll check it out later!

here's what i had tried a little

https://github.com/7ritn/VaulTLS

[–] silenium_dev@feddit.org 3 points 6 days ago (1 children)

I already had Keycloak set up, but a few services don't support OIDC or SAML (Jellyfin, Reposilite), so I've deployed lldap and connected those services and Keycloak to it. Now I really have a single user across all services

[–] WhyJiffie@sh.itjust.works 2 points 6 days ago* (last edited 6 days ago) (1 children)

how did you migrate your existing accounts to this system? or did you just make a new account from scratch?

[–] TheRagingGeek@lemmy.world 3 points 6 days ago

This week I saw my 3-machine cluster flailing trying to stay online. Digging around, I identified it as an issue with communication with my NAS. It was running NFSv3, so I swapped that to NFSv4.1 and did some tuning, and now my services have never been faster!
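For reference, forcing NFSv4.1 plus the usual transfer-size tuning is a one-line change on the client side. Server name, export path, and sizes below are illustrative, not a recommendation:

```sh
# Equivalent /etc/fstab entry:
# nas:/export/cluster  /mnt/cluster  nfs4  vers=4.1,rsize=1048576,wsize=1048576,hard  0  0
mount -t nfs4 -o vers=4.1,rsize=1048576,wsize=1048576,hard \
  nas:/export/cluster /mnt/cluster
```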

[–] baller_w@lemmy.zip 2 points 5 days ago

I migrated openaw from docker running on my raspberry pi to an old NUC I had lying around. Backed it with mainly models off of OpenRouter or my local Ollama instance. For very difficult tasks it uses Anthropic. Added it to my GitHub repo and implemented Plane for task management. Added a subagent for coding and have it work on touch-up or research tasks I don't have personal time to do. Made an SDLC document that it follows so I can review all of its work. Added a cron so it checks for work every hour. It ran out of tasks in five days. Work quality: C+, but it's a hell of a lot better than having nothing.

It helped research and implement SilverBullet for personal notes management in one shot.

I also migrated all of my services' DNS resolution to Cloudflare so I get automatic TLS handoff, and set up nginx with deny rules so any app I don't want exposed doesn't get proxied.
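(Sketch of the deny-rule pattern, in case it helps anyone; the server name, subnet, and upstream port are placeholders:)

```nginx
# Only answer for the LAN; everyone else gets 403 and nothing is proxied.
server {
    listen 80;
    server_name internal-app.example.com;

    allow 192.168.1.0/24;  # LAN only
    deny  all;             # everyone else

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```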

This weekend I’m resurrecting my HomeAssistant build.

[–] GnuLinuxDude@lemmy.ml 2 points 5 days ago

I've been self-hosting for years, but with a recent move comes a recent opportunity to do my network a bit differently. I'm now running a capable OpenWRT router, and support for AdGuard Home is practically built into OpenWRT. I just needed to configure it right and set it up, but the documentation was comprehensive enough.

For years I had kept a Debian VM running Pi-hole. I kept it ultra lean with a cloud kernel, 3 GB of disk space, and 160 MB of RAM, just so it could control its own network stack. And I'd set devices to manually use its IP address to be covered. AGH seems to be about the same exact thing as Pi-hole. With my new setup the entire network is covered automatically without having to configure any device. And yes, I know I could've done the same before by forwarding the DNS lookups to the Pi-hole, but I was always afraid it would cause a problem for me and I'd need an easy way to back out of the adblocking. Subjectively, over about 6 years, there were only a couple of worthless websites that blocked me.

I haven't yet gotten to the point where I'm also trying to intercept hardcoded DNS lookups, but soon... It's not urgent for me because I don't have sinister devices that do that.

[–] Kushan@lemmy.world 2 points 5 days ago

It was a couple of weeks ago for me but I managed to get my docker compose script for all my infrastructure cleaned up and all versions of containers are now pinned.

I have Renovate set up to open PRs when a new version is available, so I can handle updates by just accepting the PR and it's automatically deployed to my server.

Nice and easy to keep apps up to date without them randomly breaking because I didn't know of a breaking change when blindly pulling from latest.
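For anyone wanting the same setup, the only requirement on the compose side is that images carry an explicit tag for Renovate to bump (the version numbers below are just examples):

```yaml
# docker-compose.yml excerpt: pinned tags instead of :latest
services:
  jellyfin:
    image: jellyfin/jellyfin:10.9.11
  ntfy:
    image: binwiederhier/ntfy:v2.11.0
```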

[–] synapse1278@lemmy.world 2 points 6 days ago

Reconnected my light switches to Home Assistant. I just had to press the pairing button on the device again for some reason. But it's inside the switch box in the wall, which is not so practical. I wish they'd thought of another way to put the device in pairing mode, like flipping the switch on-off 10 times, something like that.

[–] BasicallyHedgehog@feddit.uk 2 points 6 days ago (1 children)

I've been running all my apps on my NAS as docker containers, but some get 'stuck' occasionally, requiring a reboot of the whole machine. Using the NAS was mostly out of convenience.

I also had an old laptop running k3s, hosting a few stateless services.

This week I picked up three Wyse 5070 devices and started setting up a more permanent Kubernetes cluster. I decided to use Talos Linux, which is a steep learning curve, but should hopefully reduce the amount of ongoing work for upgrades. I'll be deploying everything with FluxCD this time around too.

I've stumbled a bit with the synology-csi-driver. It didn't work with Talos out of the box, but turns out the latest commits have a fix. The only thing remaining before I can start porting the apps over is figuring out how to spin up a new CA and generate client certificates for mTLS. I currently do that in Vault but it seems like something cert-manager could handle going forward.
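cert-manager can indeed handle this; a minimal sketch of the usual self-signed-root pattern looks like the below (names and namespaces are placeholders, and client certs for mTLS would then reference the homelab-ca issuer):

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: cert-manager
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: homelab-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: homelab-ca
  secretName: homelab-ca-secret
  issuerRef:
    name: selfsigned
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: homelab-ca
spec:
  ca:
    secretName: homelab-ca-secret
```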

[–] funkajunk@lemmy.world 1 points 5 days ago

I also just set up a cluster using Talos!

I've never used kubernetes before, but decided it was time to learn so I picked up 4x HP EliteDesk Mini systems and dove in.

[–] Zwuzelmaus@feddit.org 2 points 6 days ago

I have tried out Openclaw in a container, and it wasn't hard at all.

All the warnings of danger are right, though. But if anything goes wild, I still know how to delete a container :-)

[–] tophneal@sh.itjust.works 2 points 6 days ago

The table (DM) might finally make the switch from Roll20 to Foundry for a campaign!

[–] harsh3466@lemmy.ml 2 points 6 days ago (1 children)

I got a test box set up with NixOS and a config that runs all of my services. I wanted to test its declarative rebuild promise, so I:

  1. Filled the services with some of my backed-up data (a copy of the data, not the actual backup)
  2. Ran it for a few days using some of the services
  3. Backed up the data of the nixos test server, as well as the nixos config
  4. Reinstalled nixos on the test box, brought in the config, and rebuilt it.

And it worked!!! All services came back with their data, and all configuration was correct.

I'm going to keep testing, and depending on how that goes I may switch my prod server and NAS to NixOS.
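For anyone wondering what "a config that runs all my services" means in practice, here's a minimal NixOS sketch. The service names are just examples; the state dirs (e.g. /var/lib/jellyfin) are the defaults you'd restore your data into after a rebuild:

```nix
{ config, pkgs, ... }:
{
  # Declared services come back automatically on rebuild;
  # only their state directories need restoring from backup.
  services.jellyfin.enable = true;
  services.paperless.enable = true;
}
```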

[–] smiletolerantly@awful.systems 3 points 6 days ago (1 children)

Very cool!

Re: the backup / restore of state in NixOS: I found myself writing the same things over and over again for each VM/service, so I finally wrote this wrapper module (in action e.g. here for Jellyfin), which configures both the backup services and timers, as well as adding a simple rsync-restore-jellyfin command to the system packages. In case you find this useful and don't already have your own abstractions, or a sufficiently different use case 😄

[–] idealpink@feddit.nu 2 points 5 days ago

This is great! Thanks

[–] atzanteol@sh.itjust.works 1 points 5 days ago

This week - Apache Airflow setup to automate running backups (replacing cron).

[–] 5ymm3trY@discuss.tchncs.de 1 points 5 days ago (2 children)

Started my self-hosting journey a couple of years ago with a Raspberry Pi, OpenMediaVault, and a couple of Docker containers. This week I finally managed to move my AdGuard Home container and my DNS setup over to my NAS, which was the final thing keeping the Pi running. I also synced all the data to the NAS.

The next step I am trying to figure out is a decent backup setup. Read about Borg, Restic and Kopia, but haven't decided on one of them yet. What are you guys using?

[–] Cyber@feddit.uk 1 points 5 days ago (1 children)

Use the one that makes most sense to you for restores.

Backup a folder, then restore it somewhere else... if any of the applications causes you problems for your setup, move on.

[–] 5ymm3trY@discuss.tchncs.de 1 points 5 days ago (1 children)

Good point. I was going to set 1-2 of them up and find out what suits my needs.

[–] Cyber@feddit.uk 1 points 5 days ago (1 children)

Just looking at my NAS now...

I used to use Kopia to backup to a Backblaze B2 bucket, but I've moved to Restic as I can backup over ssh to a NAS at a family member's home and to a Hetzner storage box.
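For the sftp setup, restic only needs SSH access on the NAS side. Repo path, host, and retention numbers below are just examples:

```sh
restic -r sftp:backup@family-nas:/srv/restic init
restic -r sftp:backup@family-nas:/srv/restic backup /home/me/documents
restic -r sftp:backup@family-nas:/srv/restic forget \
  --keep-daily 7 --keep-weekly 4 --prune
```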

[–] Saltarello@lemmy.world 1 points 5 days ago

I settled on Kopia myself but I always seem to see the others mentioned

[–] tofu@lemmy.nocturnal.garden 1 points 6 days ago (1 children)

Still waiting for my success. Pi-hole randomly doesn't answer DNS requests in time, causing a lot of trouble between my services. It's been happening since I switched to dnsmasq in opnsense (which is upstream for my local domain for Pi-hole), but it also happens for external domains. Can't nail it down and am this short of reconsidering my whole network setup. It used to work fine for over a year though...

Opnsense dnsmasq is DHCP for my servers and also resolves them as local hosts (e.g. server1.local.domain), and Pi-hole conditionally forwards there. Since the issue also occurs when resolving external domains, it shouldn't be related, but the timing is suspicious. I also switched the general upstream DNS.

Pi-hole does have some logs indicating too many concurrent requests, but those don't always correlate with the timeouts.

I know it's DNS, I just don't know where yet.
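One way to narrow it down is to time each hop in the chain directly with short timeouts. IPs below are placeholders for Pi-hole, opnsense dnsmasq, and an external upstream:

```sh
dig @192.168.1.2 example.com +time=1 +tries=1           # Pi-hole directly
dig @192.168.1.1 server1.local.domain +time=1 +tries=1  # opnsense dnsmasq
dig @9.9.9.9 example.com +time=1 +tries=1               # upstream, bypassing both
```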

[–] brygphilomena@lemmy.dbzer0.com 2 points 6 days ago (1 children)

Is dnsmasq rate limiting the Pi's IP? Or is opnsense intercepting port 53 outbound and sending it to dnsmasq anyway, so all Pi DNS queries are being resolved in dnsmasq?

[–] tofu@lemmy.nocturnal.garden 1 points 5 days ago

Opnsense is only between the servers and the Pi; the Pi is in the same subnet as our consumer devices and the opnsense (directly connected to the router). The issues occur both on the consumer devices and on the servers, so the opnsense should not be the direct issue.

[–] Natal@lemmy.world 1 points 6 days ago

Hum. I've been smooth sailing for a while now. I've tried installing OwnTracks again and made some progress by figuring out that Cloudflare Tunnels are a problem (at least the way I configured them). New to MQTT. So the app still doesn't work properly, but now I have an idea why and I'm not just banging my head on the wall anymore.

[–] shrek_is_love@lemmy.ml 1 points 6 days ago

I got Terminus for the TRMNL set up using Podman on my server running NixOS.

Although I'm actually planning on replacing Terminus with my own simple server app that way it can be even more declarative (no Postgres database of devices/users/screens) and easier for me to customize. The API I'll have to implement is extremely straightforward, so I don't anticipate it taking too long.

[–] ragingHungryPanda@piefed.keyboardvagabond.com 1 points 6 days ago* (last edited 6 days ago)

I got Gitea running on my VPS cluster that I use to host Keyboard Vagabond services. I moved my repository from my home PC into it, and set up an action runner to automate building and deploying PieFed: it runs my build script, pushes to a Harbor registry (internal), and then deletes and recreates a job to run DB migrations and restart the web and worker pods.

I'm going to migrate the other build services to it as well, and after that I should be able to finally get all of my services behind Cloudflare Tunnels and Tailscale, and finally remove the last bits of ingress-nginx. The registry was the only thing still on ingress-nginx because I needed to push larger image files than Cloudflare permits. Since all of that is internal now, I get to finally seal those bits off.

The build is also faster since I don't have to rely on wifi.
