a_fancy_kiwi

joined 2 years ago
[–] a_fancy_kiwi@lemmy.world 1 points 1 week ago

TIL. Thanks for the information

[–] a_fancy_kiwi@lemmy.world 1 points 1 week ago

I’m currently not in a situation where swap is being used, so I think my system is doing fine right now. I’m not against swap; I get that it’s better to have it than not, but my intention was to figure out how close my system is getting to using swap. If it went from not using swap at all to using it constantly, I’d probably want to upgrade my RAM, right? If nothing else, just to avoid system slowdowns and unneeded wear on my SSD.

[–] a_fancy_kiwi@lemmy.world 1 points 1 week ago (2 children)

From what I can tell, my system isn’t currently using swap at all but it does have 8GB of available swap if needed.

To make sure I’m following what you are saying: if I upgraded my system to 64GB and changed nothing else, and let’s assume ZFS didn’t try caching more stuff, would there still be the potential for my system to use swap just because the system wanted to, even if it wasn’t memory constrained?

[–] a_fancy_kiwi@lemmy.world 4 points 1 week ago (1 children)

Came across some more info that you might find interesting. If true, htop is ignoring the cache used by ZFS but accounting for everything else.

link

[–] a_fancy_kiwi@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

Assuming the info in this link is correct, ZFS is using ~20GB for caching, which makes htop's ~8GB of in-use memory make sense when compared with the results from cat /proc/meminfo. This is great news.

My results after running cat /proc/spl/kstat/zfs/arcstats:

c                               4    19268150979
c_min                           4    1026222848
c_max                           4    31765389312
size                            4    19251112856
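The arcstats output above can be read programmatically. A minimal sketch, assuming the file format shown: the sample string below is copied from the values printed in this comment, and the conversion is to decimal GB (which is what makes the "~20GB" figure line up).

```python
# Hedged sketch: pull the ARC "size" counter (bytes) out of arcstats output
# and convert to decimal GB. The sample is copied from the values above.
sample = """\
c                               4    19268150979
c_min                           4    1026222848
c_max                           4    31765389312
size                            4    19251112856
"""

def arc_size_gb(text):
    """Return the ARC 'size' counter converted from bytes to decimal GB."""
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0] == "size":
            return int(fields[2]) / 1e9
    raise ValueError("no 'size' entry found")

print(f"ARC size: {arc_size_gb(sample):.2f} GB")  # ~19.25 GB, i.e. the ~20GB above
```

On a live system you'd read /proc/spl/kstat/zfs/arcstats itself instead of the sample string.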
[–] a_fancy_kiwi@lemmy.world 3 points 1 week ago (3 children)

Thank you for the detailed explanation

[–] a_fancy_kiwi@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (6 children)

You're an angel. ~~I don't know what the fuck htop is doing showing 8GB in use~~ ~~Based on another user comment in this thread, htop is showing a misleading number~~. For anyone else who comes across this, this is what I have. ~~This makes the situation seem a little more grim.~~ I have ~2GB free, ~28GB in use ~~, and of that ~28GB only ~3GB is cache that can be closed~~. For reference, I'm using ZFS and roughly 27 docker containers. ~~It doesn't seem like there is much room for future services to selfhost.~~

MemTotal:           30.5838      GB
MemFree:            1.85291      GB
MemAvailable:       4.63831      GB
Buffers:            0.00760269   GB
Cached:             3.05407      GB
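For reference, /proc/meminfo reports values in kB; dividing by 1024² gives the GiB-scale numbers shown above (they're labeled GB but match htop's 30.6G binary total). A sketch of the conversion; the sample kB figures here are back-calculated from this comment, not read from a real system.

```python
# Hedged sketch: parse /proc/meminfo-style "Key:  value kB" lines into GiB.
# The sample kB values are back-calculated from this comment (assumption).
sample = """\
MemTotal:       32069439 kB
MemFree:         1942917 kB
MemAvailable:    4863621 kB
"""

def meminfo_gib(text):
    """Parse 'Key:  value kB' lines into {key: GiB}."""
    out = {}
    for line in text.splitlines():
        key, rest = line.split(":")
        out[key.strip()] = int(rest.split()[0]) / 1024**2  # kB -> GiB
    return out

info = meminfo_gib(sample)
print(f"MemTotal: {info['MemTotal']:.4f} GiB")  # ~30.5838
```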
[–] a_fancy_kiwi@lemmy.world 2 points 1 week ago (9 children)

That's pretty much where I'm at on this. As far as I'm concerned, if my system touches swap at all, it's run out of memory. At this point, I'm hoping to figure out what percent of the memory in use is unimportant cache that can be closed vs important files that processes need to function.

[–] a_fancy_kiwi@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (6 children)

Is there a good way to tell what percent of RAM in use is less important caching of files that could be closed without any adverse effects, vs files that, if closed, would stop the whole app from functioning?

Basically, I'm hoping htop isn't broken and is reporting that I have 8GB of important, showstopping files open, and that everything else is cache that's unimportant/closable without the need to touch swap.

[–] a_fancy_kiwi@lemmy.world 3 points 1 week ago (8 children)

This is why I'd like to know which tool shows the most useful number. If I only have 4GB out of 30GB left, is that 26GB difference mostly important processes or mostly closable cache? Like, is htop borked and not showing me useful info, or is it saying 8GB of the 26GB used is important, showstopping stuff?

[–] a_fancy_kiwi@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (2 children)

Fuck. This is a bad time to be running low on memory

 

I recently noticed that htop displays a much lower 'memory in use' number than free -h, top, or fastfetch on my Ubuntu 25.04 server.

I am using ZFS on this server and I've read that ZFS will use a lot of RAM. I also read a forum comment saying that htop doesn't show caching used by the kernel, but I'm not sure how to confirm that ZFS is what's causing the discrepancy.

I'm also running a bunch of docker containers and am concerned about stability since I don't know which number I should be looking at. Depending on the tool, I either have ~22GB, ~4GB, or ~1GB of usable memory left. Is htop the better metric to use when my concern is available memory for new docker containers, or are the other tools better?

Server Memory Usage:

  • htop = 8.35G / 30.6G
  • free -h =
               total        used        free      shared  buff/cache   available
Mem:            30Gi        26Gi       1.3Gi       730Mi       4.2Gi       4.0Gi
  • top = MiB Mem : 31317.8 total, 1241.8 free, 27297.2 used, 4355.9 buff/cache
  • fastfetch = 26.54GiB / 30.6GiB

EDIT:

Answer

My Results

tldr: all the tools are showing correct numbers. Htop seems to be ignoring ZFS cache. For the purposes of ensuring there is enough RAM for more docker containers in the future, htop seems to be the tool that shows the most useful number with my setup.
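The tldr above can be sanity-checked with quick arithmetic, assuming (per the thread's conclusion) that free's "used" includes the ZFS ARC while htop's "in use" leaves it out:

```python
# Quick arithmetic check of the tldr, using numbers from this thread.
# Assumption (the thread's conclusion): free's "used" counts the ZFS ARC,
# while htop's "in use" excludes it.
free_used_gib = 26.0             # free -h reported 26Gi used
arc_bytes = 19251112856          # ARC "size" from arcstats
htop_used_gib = 8.35             # htop reported 8.35G in use

used_without_arc = free_used_gib - arc_bytes / 2**30
print(f"used minus ARC: {used_without_arc:.1f} GiB (htop shows {htop_used_gib}G)")  # ~8.1
```

Subtracting the ~17.9 GiB ARC from free's 26 GiB used lands close to htop's 8.35G, which is what makes all the tools consistent with each other.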

[–] a_fancy_kiwi@lemmy.world 2 points 1 week ago (1 children)

I didn’t realize it was that old. Whoever is maintaining it is doing a good job making it look modern

 

This is a continuation of my other post

I now have homeassistant, immich, and authentik docker containers exposed to the open internet. Homeassistant has built-in 2FA, and authentik is being used as the authentication for immich, which supports 2FA. I went ahead and blocked connections from every country except my own via Cloudflare (I'm aware this does almost nothing, but I feel better about it).

At the moment, if my machine became compromised, I wouldn't know. How do I monitor these docker containers? What's a good way to block IPs based on failed login attempts? Is there a tool that could alert me if my machine was compromised? Any recommendations?

EDIT: Oh, and if you have any recommendations for settings I should change in the cloudflare dashboard, that would be great too; there's a ton of options in there and a lot of them are defaulted to "off"

 

tldr: I'd like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services over the internet, but I'm not sure what the best/safest way to do it is. Asking my partner to use tailscale or wireguard is asking too much, unfortunately. I was curious what you all recommend.

I have some services running on my LAN that I currently access via tailscale. Some of these services would benefit from being accessible on the internet (e.g. Immich sharing via a link, switching from Plex to Jellyfin without requiring my family to learn how to use a VPN, homeassistant voice stuff, etc.), but I'm kind of unsure what the best approach is. Hosting services on the internet carries risk, and I'd like to reduce that risk as much as possible.

  1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains, but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share CPU resources with other users and get a dedicated box instead?

  2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

  3. What's the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.

  4. Any other tips or info you care to share would be greatly appreciated.

  5. Feel free to talk me out of it as well.

EDIT:

If anyone comes across this and is interested, this is what I ended up going with. It took an evening to set all this up and was surprisingly easy.

  • domain from namecheap
  • cloudflare to handle DNS
  • Nginx Proxy Manager for reverse proxy (seemed easier than Traefik and I didn't get around to looking at Caddy)
  • Cloudflare-ddns docker container to update my A records in cloudflare
  • authentik for 2 factor authentication on my immich server
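For anyone reproducing the stack above, a hypothetical docker-compose sketch of the Nginx Proxy Manager + Cloudflare DDNS pieces. Image names are the commonly used community ones, and every value is a placeholder assumption rather than my actual config; check each image's docs for the exact env var names before using.

```yaml
# Hypothetical sketch of the NPM + ddns part of the stack above.
# All values are placeholders; verify against each image's own docs.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"      # HTTP
      - "443:443"    # HTTPS
      - "81:81"      # NPM admin UI
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt

  cloudflare-ddns:
    image: favonia/cloudflare-ddns:latest   # assumption: one of several ddns images
    restart: unless-stopped
    environment:
      CF_API_TOKEN: "<scoped Cloudflare API token>"
      DOMAINS: "example.com"                # placeholder domain
```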
 

I've been interested in building a DIY NAS out of an SBC for a while now. Not as my main NAS, but as a backup I can store offsite at a friend's or relative's house. I know any old x86 box would probably do better; this project is just for the fun of it.

The Orange Pi 5 looks pretty decent with its RK3588 chip and M.2 PCIe 3.0 x4 connector. I've seen some adapters that can turn that M.2 slot into a few SATA ports or even a full x16 slot which might let me use an HBA.

Anyway, my question is: assuming the CPU isn't a bottleneck, how do I figure out what kind of throughput this setup could theoretically give me?

After a few google searches:

  • PCIe Gen 3 x4 should give me 4 GB/s throughput
  • that M.2 to SATA adapter claims 6 ~~GB/s~~ Gb/s throughput
  • a single 7200rpm hard drive should give about 80-160MB/s throughput

My guess is that ultimately I'm limited by the 4GB/s throughput of the PCIe Gen 3 x4 slot, but since I'm using hard drives, I'd never get close to saturating that bandwidth. Even if I were using 4 hard drives in a RAID 0 config (which I wouldn't do), I still wouldn't come close. Am I understanding that correctly; is it really that simple?
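The reasoning above checks out with back-of-envelope numbers (all figures in MB/s; the per-drive figure is the optimistic end of the 80-160 MB/s range quoted above):

```python
# Back-of-envelope check of the bandwidth reasoning above, in MB/s.
pcie_x4 = 4000        # PCIe Gen 3 x4, ~4 GB/s theoretical
sata_link = 600       # a 6 Gb/s SATA link is ~600 MB/s per port
hdd = 160             # 7200rpm sequential throughput, optimistic end

four_drives = 4 * hdd  # the hypothetical 4-disk RAID 0 case
print(f"4 HDDs: {four_drives} MB/s vs PCIe x4: {pcie_x4} MB/s")
# 640 MB/s is well under 4000 MB/s, so the disks, not the slot, are the limit.
```

Even four drives striped together use about a sixth of the slot's theoretical bandwidth, so yes: with spinning disks it really is that simple.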
