this post was submitted on 19 Jan 2026
17 points (94.7% liked)

Selfhosted


I recently noticed that htop displays a much lower 'memory in use' number than free -h, top, or fastfetch on my Ubuntu 25.04 server.

I am using ZFS on this server and I've read that ZFS will use a lot of RAM. I also read a forum comment saying that htop doesn't show caching done by the kernel, but I'm not sure how to confirm that ZFS is what's causing the discrepancy.

I'm also running a bunch of docker containers and am concerned about stability, since I don't know which number I should be looking at. Depending on the tool, I either have ~22GB, ~4GB, or ~1GB of usable memory left. Is htop the better metric to use when my concern is available memory for new docker containers, or are the other tools better?

Server Memory Usage:

  • htop = 8.35G / 30.6G
  • free -h =
               total        used        free      shared  buff/cache   available
Mem:            30Gi        26Gi       1.3Gi       730Mi       4.2Gi       4.0Gi
  • top = MiB Mem : 31317.8 total, 1241.8 free, 27297.2 used, 4355.9 buff/cache
  • fastfetch = 26.54GiB / 30.6GiB
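For anyone who wants to reproduce the comparison: all of these tools ultimately read /proc/meminfo. A quick sketch of pulling the two fields that matter (the here-doc holds sample values matching the numbers above; on a live system, point awk at /proc/meminfo instead):

```shell
# Parse MemTotal and MemAvailable (meminfo values are in kB).
# Live version:  awk '...' /proc/meminfo
awk '
  /^MemTotal:/     { total = $2 }
  /^MemAvailable:/ { avail = $2 }
  END { printf "total %.1f GiB, available %.1f GiB\n", total/1048576, avail/1048576 }
' <<'EOF'
MemTotal:       32069427 kB
MemFree:        1331200 kB
MemAvailable:   4194304 kB
EOF
```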

EDIT:

Answer

My Results

tl;dr: all of the tools are showing correct numbers; htop just seems to be ignoring the ZFS cache. For the purpose of ensuring there is enough RAM for more docker containers in the future, htop shows the most useful number with my setup.

top 27 comments
[–] vk6flab@lemmy.radio 16 points 11 hours ago (2 children)

Linux aggressively caches things.

4 GB of RAM is not running out of memory.

If you start using swap, you're running into a situation where you might run out of memory.

If oomkiller starts killing processes, then you're running out of memory.

[–] tal@lemmy.today 3 points 10 hours ago (1 children)

If oomkiller starts killing processes, then you’re running out of memory.

Well, you might want to avoid digging into swap at all.

[–] a_fancy_kiwi@lemmy.world 1 points 10 hours ago (3 children)

That's pretty much where I'm at on this. As far as I'm concerned, if my system touches swap at all, it's run out of memory. At this point, I'm hoping to figure out what percentage of the memory in use is unimportant cache that can be dropped vs. important data that processes need to function.

[–] EarMaster@lemmy.world 5 points 6 hours ago

If that's the case you should look into your swappiness settings. You can set this to zero meaning the swap will only be used if you're actually out of memory, but as others have noted that is maybe not a healthy decision…
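For reference, a sketch of checking and tuning it (the value 10 is just an example; defaults vary by distro, and Ubuntu ships 60):

```shell
# Current value
cat /proc/sys/vm/swappiness

# Make the kernel prefer dropping cache over swapping anonymous pages.
# Note: 0 does not hard-disable swap; it only swaps under severe pressure.
sudo sysctl vm.swappiness=10

# Persist across reboots
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```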

[–] eager_eagle@lemmy.world 6 points 6 hours ago

if my system touches SWAP at all, it's run out of memory

That's a swap myth. Swap is not emergency memory; it's about creating a memory reclamation space on disk for anonymous pages (pages that are not file-backed) so that the OS can use main memory more efficiently.

The swapping algorithm does take into account the higher cost of putting pages in swap. Touching swap may just mean that a lot of system files are being cached, but that's reclaimable space and it doesn't mean the system is running out of memory.

[–] victorz@lemmy.world 1 points 8 hours ago (1 children)

It's just that, back when I had only 32 GB of memory, the system would freeze whenever I ran out. I couldn't do anything and had to hard-reset the computer with its reset button. It would be nice to have a little bit of swap so some stuff can be killed before literally everything stops working.

[–] B0rax@feddit.org 1 points 4 hours ago (1 children)
[–] victorz@lemmy.world 1 points 4 hours ago

Yeah, on multiple computers. Linux, I feel, will just happily hand out memory on loan like a bank rather than from what's actually available. Then when it runs out, the next request for more memory just freezes the system. ☠️

[–] a_fancy_kiwi@lemmy.world 2 points 10 hours ago* (last edited 10 hours ago) (2 children)

Is there a good way to tell what percentage of the RAM in use is less-important file cache that could be dropped without any adverse effects, vs. memory that an app needs to keep functioning?

Basically, I'm hoping htop isn't broken and is reporting that I have 8GB of important, showstopping data in use and everything else is cache that's unimportant/droppable without the need to touch swap.

[–] tal@lemmy.today 4 points 9 hours ago* (last edited 9 hours ago) (1 children)

https://stackoverflow.com/questions/30869297/difference-between-memfree-and-memavailable

Rik van Riel's comments when adding MemAvailable to /proc/meminfo:

/proc/meminfo: MemAvailable: provide estimated available memory

Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up "free" and "cached", which was fine ten years ago, but is pretty much guaranteed to be wrong today.

It is wrong because Cached includes memory that is not freeable as page cache, for example shared memory segments, tmpfs, and ramfs, and it does not include reclaimable slab memory, which can take up a large fraction of system memory on mostly idle systems with lots of files.

Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the "low" watermarks from /proc/zoneinfo.

However, this may change in the future, and user space really should not be expected to know kernel internals to come up with an estimate for the amount of free memory.

It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place.
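The gap the commit message describes shows up in the numbers from the original post: the old "free + cached" heuristic overestimates what MemAvailable reports. A sketch using the MiB figures from the top output above:

```shell
# Old heuristic: "free + buff/cache", using top's MiB figures
# (1241.8 free, 4355.9 buff/cache)
awk 'BEGIN { printf "old estimate: %.1f GiB\n", (1241.8 + 4355.9)/1024 }'

# free -h reported "available" (i.e. MemAvailable) as only 4.0 GiB
# on the same snapshot
```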

Looking at the htop source:

https://github.com/htop-dev/htop/blob/main/MemoryMeter.c

   /* we actually want to show "used + shared + compressed" */
   double used = this->values[MEMORY_METER_USED];
   if (isPositive(this->values[MEMORY_METER_SHARED]))
      used += this->values[MEMORY_METER_SHARED];
   if (isPositive(this->values[MEMORY_METER_COMPRESSED]))
      used += this->values[MEMORY_METER_COMPRESSED];

   written = Meter_humanUnit(buffer, used, size);

It's adding used, shared, and compressed memory, to get the amount actually tied up, but disregarding cached memory, which, based on the above comment, is problematic, since some of that may not actually be available for use.

free and top, on the other hand, use the kernel's MemAvailable directly; for example, from procps's free.c:

https://gitlab.com/procps-ng/procps/-/blob/master/src/free.c

	printf(" %11s", scale_size(MEMINFO_GET(mem_info, MEMINFO_MEM_AVAILABLE, ul_int), args.exponent, flags & FREE_SI, flags & FREE_HUMANREADABLE));

In short: you probably want to trust /proc/meminfo's MemAvailable (which is what free and top show), and htop is probably giving a misleadingly low number.
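A one-liner for checking it directly (a sketch; should work on any Linux with /proc mounted):

```shell
# Kernel's estimate of allocatable memory, in GiB (meminfo values are in kB)
awk '/^MemAvailable:/ { printf "%.2f GiB available\n", $2/1048576 }' /proc/meminfo
```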

[–] a_fancy_kiwi@lemmy.world 3 points 9 hours ago (1 children)

Thank you for the detailed explanation

[–] tal@lemmy.today 3 points 9 hours ago (1 children)

No problem. It was an interesting question that made me curious too.

[–] a_fancy_kiwi@lemmy.world 3 points 8 hours ago (1 children)

Came across some more info that you might find interesting. If true, htop is ignoring the cache used by ZFS but accounting for everything else.

link

[–] non_burglar@lemmy.world 1 points 43 minutes ago

Yes, ZFS cache has been contentious for exactly the reason you posted, but it is generally not a functional issue.

ZFS will release its cache under memory pressure; however, virtualization workloads can demand memory sooner than ZFS can release it.

There have been many changes to ZFS to improve this, but the legacy of the "invisible cache" is still around.

[–] vk6flab@lemmy.radio 2 points 10 hours ago

This is the job for the OS.

You can run most Linux systems with stupid amounts of swap and the only thing you'll notice is that stuff starts slowing down.

In my experience, only in extremely rare cases are you smarter than the OS. In 25+ years of using Linux daily I've seen it exactly once, when oomkiller killed running mysqld processes, which would have been fine if the developer had used transactions. Suffice it to say, they did not.

I used a 1-minute cron job to reprioritize the process, problem "solved"... for a system that hadn't been updated in 12 years but was still live while we documented what it was doing and what was required to upgrade it.

[–] Shimitar@downonthestreet.eu 10 points 11 hours ago (1 children)

You actually WANT to be with low free memory. Provided that most of it is used by cache.

Free memory is a waste, when you could cache stuff for faster access.

That's how Linux memory management works, and it makes sense if you reflect on it: better to cache that page or that file that's used often, since free memory is just wasted. Cache can be freed and the memory reclaimed in a fraction of a millisecond when needed.

So don't worry too much. Unless your swap usage is high, don't bother.

Also consider that the Linux kernel will use your swap a bit even if you have lots of cache, because the kernel knows better than you how to improve performance. Swapping out never-used stuff is better than evicting cached items.

Again, don't overthink memory on Linux. The best alarm is when swapping is constantly happening; then, yes, you need more RAM (or to kill that broken process that keeps hogging memory due to a bug).

[–] a_fancy_kiwi@lemmy.world 3 points 10 hours ago (1 children)

This is why I'd like to know what tool shows the most useful number. If I only have 4GB out of 30GB left, is that 26GB difference mostly important processes or mostly closable cache? Like, is htop borked and not showing me useful info or is it saying 8GB of the 26GB used is important showstopping stuff?

[–] rtxn@lemmy.world 4 points 10 hours ago* (last edited 9 hours ago) (1 children)

The most useful is probably cat /proc/meminfo. The first couple of lines tell you everything you need to know.

  • MemTotal is the total useful memory.
  • MemFree is how much memory is not used by anything.
  • Cached is memory used by various caches, ~~e.g. ZFS~~. This memory can be reallocated.
  • MemAvailable is an estimate of how much memory can be allocated without swapping, roughly MemFree plus the reclaimable portion of the cache.
[–] a_fancy_kiwi@lemmy.world 1 points 9 hours ago* (last edited 8 hours ago) (2 children)

You're an angel. ~~I don't know what the fuck htop is doing showing 8GB in use~~ ~~Based on another user comment in this thread, htop is showing a misleading number~~. For anyone else who comes across this, this is what I have. ~~This makes the situation seem a little more grim.~~ I have ~2GB free, ~28GB in use ~~, and of that ~28GB only ~3GB is cache that can be closed~~. For reference, I'm using ZFS and roughly 27 docker containers. ~~It doesn't seem like there is much room for future services to selfhost.~~

MemTotal:           30.5838      GB
MemFree:            1.85291      GB
MemAvailable:       4.63831      GB
Buffers:            0.00760269   GB
Cached:             3.05407      GB
[–] Voroxpete@sh.itjust.works 1 points 4 hours ago

Most of those containers are probably grabbing more memory than they actually need. Consider applying some resource constraints to some of them.

Dozzle is an excellent addition to your docker setup, giving you live performance graphs for all your containers. It can help a lot with fine tuning your setup.
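A sketch of what such constraints might look like (the container name, image, and the 512 MiB figure are all placeholders, not recommendations):

```shell
# Hard-cap a container's memory; setting --memory-swap equal to --memory
# disables swap use for the container
docker run -d --name some-service --memory=512m --memory-swap=512m nginx:alpine

# Equivalent snippet for docker-compose.yml:
#   services:
#     some-service:
#       image: nginx:alpine
#       mem_limit: 512m
#       memswap_limit: 512m
```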

[–] rtxn@lemmy.world 3 points 9 hours ago (1 children)

You should also look at which processes use the largest amount of memory. ZFS is weird and might allocate its cache memory as "used" instead of "cached". See here to set its limits: https://forum.proxmox.com/threads/limit-zfs-memory.140803/
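The gist of the approach in that link, as a sketch (assumes OpenZFS on Linux; the 8 GiB cap is an arbitrary example, not a recommendation):

```shell
ARC_MAX=$((8 * 1024 * 1024 * 1024))   # 8 GiB in bytes

# Apply at runtime (the ARC shrinks toward the new cap gradually)
echo "$ARC_MAX" | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Persist across reboots
echo "options zfs zfs_arc_max=$ARC_MAX" | sudo tee /etc/modprobe.d/zfs.conf
```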

[–] a_fancy_kiwi@lemmy.world 1 points 9 hours ago* (last edited 8 hours ago)

Assuming the info in this link is correct, ZFS is using ~20GB for caching, which makes htop's ~8GB of in-use memory make sense when compared with the results from cat /proc/meminfo. This is great news.

My results after running cat /proc/spl/kstat/zfs/arcstats:

c                               4    19268150979
c_min                           4    1026222848
c_max                           4    31765389312
size                            4    19251112856
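Converting that size field (which is in bytes) by hand confirms the figure; a sketch, with the live one-liner in a comment:

```shell
# arcstats "size" is in bytes: 19251112856 B ≈ 17.9 GiB (≈ 19.3 GB decimal)
awk 'BEGIN { printf "%.1f GiB\n", 19251112856/1073741824 }'

# Live version:
#   awk '$1 == "size" { printf "%.1f GiB\n", $3/1073741824 }' /proc/spl/kstat/zfs/arcstats
```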
[–] TomAwezome@lemmy.world 2 points 10 hours ago

Yeah, Linux likes to fill the cache entirely. If you want it to do that less and performance isn't a concern, set up something that drops the caches every few hours by running this as root: echo 1 > /proc/sys/vm/drop_caches

[–] mhzawadi@lemmy.horwood.cloud 2 points 11 hours ago (1 children)

I've always used free -mh to check memory, so I would say you have 4 GB available.

[–] a_fancy_kiwi@lemmy.world 1 points 11 hours ago* (last edited 11 hours ago) (1 children)

Fuck. This is a bad time to be running low on memory

[–] cmnybo@discuss.tchncs.de 3 points 9 hours ago (1 children)

Depending on the workload, compression may be an option. You can use zram or zswap to basically get more RAM at the expense of increased CPU usage.
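A minimal zram sketch (assumes the zram kernel module and zstd support are available; the 4 GiB size is an arbitrary example):

```shell
sudo modprobe zram
echo zstd | sudo tee /sys/block/zram0/comp_algorithm   # set before disksize
echo 4G   | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0   # prefer zram over any disk swap
```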

[–] frongt@lemmy.zip 1 points 3 hours ago

On a modern CPU, the cost should be insignificant.