cross-posted from: https://lemmy.nocturnal.garden/post/387129

Hi, I've had issues for the last few days where my services were sporadically unreachable via their domains. They are spread across 2-3 VMs, which are working fine and can be reached via their domains (usually x.my.domain subdomains) through my nginx reverse proxy (running in its own Debian VM). The services themselves were running fine. My monitoring (Node Exporter/Prometheus) notified me that the conntrack limit on the nginx VM was reached during the time frames when my services weren't reachable, so that seems to be the obvious issue.
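For anyone hitting the same thing, comparing the current table size against the limit is straightforward (a quick sketch using the standard kernel sysctl names, assuming the nf_conntrack module is loaded):

    # current number of tracked connections vs. the configured maximum
    sysctl net.netfilter.nf_conntrack_count
    sysctl net.netfilter.nf_conntrack_max

    # same values via /proc, handy on minimal VMs
    cat /proc/sys/net/netfilter/nf_conntrack_count
    cat /proc/sys/net/netfilter/nf_conntrack_max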

As for the why, it seems my domains are now known to more spammers/scripters. The nginx error.log grew by a factor of 100 from one day to the next. Most of my services are restricted to local IPs, but some, like this Lemmy instance, are open to the internet entirely (the nginx VM has ports 80 and 443 forwarded).
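To get an idea of who is hammering the proxy, something like this lists the noisiest client IPs (a rough sketch; it assumes the default combined log format at /var/log/nginx/access.log, so adjust the path for your setup):

    # top 20 client IPs by request count in the current access log
    awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -n 20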

I had never heard of conntrack before, but I tried to read up on it a bit. It keeps track of the VM's connections. The limit seems rather low; apparently it depends on the memory of the VM, which is also low. I can increase the memory and the limit, but some posts suggest disabling conntrack entirely if it's not strictly needed. The VM does nothing but reverse proxying, so I'm not sure I really need it. I usually stick to Debian's defaults, though. I'd appreciate input on this, as I don't really see what the consequences would be. Can it really just be disabled?
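If raising the limit turns out to be the way to go, my understanding is a sysctl drop-in does it (just a sketch; the value 262144 and the file name are arbitrary examples, not recommendations):

    # temporary change, lost on reboot
    sudo sysctl -w net.netfilter.nf_conntrack_max=262144

    # persistent change, picked up by systemd-sysctl on boot
    echo 'net.netfilter.nf_conntrack_max = 262144' | sudo tee /etc/sysctl.d/90-conntrack.conf
    sudo sysctl --system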

But that's just making the symptoms go away, and I'd like to stop the attackers before they even reach the VM/nginx. I basically have 2 options:

  • The VM has ufw enabled and I can set up fail2ban (should've done that earlier). However, I'm not sure this helps with the conntrack issue, since attackers need to make a connection before getting f2b'd, and that entry will stay in the conntrack table for a while. (See the jail sketch after this list.)
  • There's an OPNsense between the router and the nginx VM. I still have to figure out how, but I bet there's a way to subscribe to known-attacker IP lists and auto-block them, or something similar. I'd like some transparency here, though, and would also want to see which of the blocked IPs actually try to get in.
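For the fail2ban route, a minimal jail config could look roughly like this (a sketch, not something I've tested yet; the nginx-http-auth and nginx-botsearch filters and the ufw ban action ship with the Debian fail2ban package, and the log paths assume stock nginx logging):

    # /etc/fail2ban/jail.local -- minimal sketch
    [DEFAULT]
    # use ufw for bans since it's already enabled on the VM
    banaction = ufw
    bantime   = 1h
    findtime  = 10m
    maxretry  = 5

    # filters below are included with the fail2ban package
    [nginx-http-auth]
    enabled = true
    logpath = /var/log/nginx/error.log

    [nginx-botsearch]
    enabled = true
    logpath = /var/log/nginx/access.log

That only bans repeat offenders after they've already connected a few times, so it reduces conntrack pressure rather than eliminating it; blocking on the OPNsense (I believe its firewall aliases can pull external IP lists as "URL Table" aliases) would catch them one hop earlier.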

Would appreciate thoughts or ideas on this!

Mikelius@lemmy.ml 2 points 2 days ago

I'd hesitate to disable it altogether unless you're absolutely certain nothing will need it. One suggestion I haven't seen mentioned is looking at the other sysctl options that might be tweaked. Check with netstat how many of those connections are stuck in ESTABLISHED, CLOSE_WAIT, TIME_WAIT, etc. It's possible you just need to lower the default values of things like nf_conntrack_tcp_timeout_established, for example. https://www.kernel.org/doc/html/latest/networking/nf_conntrack-sysctl.html - naturally, research anything you think you might want to change before you do.
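Something along these lines (a rough sketch; the timeout value and file name are examples only, and the defaults are documented at the link above):

    # count TCP connections per state as seen by the proxy VM
    ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn

    # count TCP entries per state in the conntrack table itself (conntrack-tools package)
    sudo conntrack -L -p tcp | awk '{print $4}' | sort | uniq -c | sort -rn

    # example: shorten the established timeout from the 5-day default to 1 hour
    echo 'net.netfilter.nf_conntrack_tcp_timeout_established = 3600' | sudo tee /etc/sysctl.d/91-conntrack-timeouts.conf
    sudo sysctl --system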