lemmy.net.au

54 readers
3 users here now

This instance is hosted in Sydney, Australia, and maintained by Australian administrators.

Feel free to create and/or join communities for any topics that interest you!

Rules are very simple

Mobile apps

https://join-lemmy.org/apps

What is Lemmy?

Lemmy is a self-hosted social link aggregation and discussion platform. It is completely free and open, and not controlled by any company. This means that there is no advertising, tracking, or secret algorithms. Content is organized into communities, so it is easy to subscribe to topics that you are interested in, and ignore others. Voting is used to bring the most interesting items to the top.

Think of it as an open-source alternative to Reddit!

founded 1 year ago

This is some World War 3 shit


In a move clearly designed to strengthen its position among developers, OpenAI has acquired Python tool maker Astral. The house of Altman expects the deal to strengthen the ecosystem for its Codex programming agent.

Since its founding in 2022 by Charlie Marsh, Astral has won over a substantial portion of the Python community with Rust-based tools like uv (package and project manager), Ruff (linting and formatting), and ty (type checker) that outperform Python-based tools like pip.


I understand that money needs to continually be printed as bills and coins are damaged or lost, but wouldn't any currency be way more stable if it was just printed slower than it's taken out of circulation?


To be clear, money wouldn't be involved in this, since it would be WAY too easy to cheat.


Double-check that your NFS timeouts to your NAS aren't an NFS problem. They might be a dirty page writeback problem.

I'm really sorry in advance for the wall of text here. I debated trimming this down, but honestly the whole reason I spent months stuck on this is that nothing about it was obvious. The symptoms point you at NFS, your mount options, your network, everything except what's actually wrong. And because the defaults that cause it ship with basically every Linux distro, I'd bet money there's a ton of people out there with the same problem right now, just blaming their NAS or Jellyfin or whatever. For all I know this is common knowledge and I'm just the last person to figure it out, but on the off chance somebody else is out there googling the same NFS timeout errors I was, here's the full story. (TL;DR below.)

I've been chasing NFS issues on my Proxmox cluster for months now, and I finally found the actual cause, and it wasn't anything I'd seen anyone talk about online. Figured I'd write it up because I guarantee other people are hitting this exact same wall.

The setup: half a dozen VMs on Proxmox, all mounting a Synology NAS over NFS. Jellyfin, Audiobookshelf, Sonarr, Radarr, the usual self-hosted media stack. Things would work fine for a while and then randomly go sideways. Jellyfin stops mid-playback. Audiobookshelf loses track of where you were. Sonarr tries to import a downloaded episode and the entire container locks up. dmesg fills with "nfs: server 192.168.1.50 not responding, timed out" and you're rebooting things again.

The part that kept me going in circles for so long is that it was never consistent. An audiobook would stream for hours without a hiccup, but then Sonarr would try to move a 4GB episode file and the whole mount would go down. I could ls the mount and browse around just fine even while Sonarr was hung. Small file operations worked. Large writes didn't. But not always: sometimes a big import would go through without a problem, and I'd convince myself whatever I'd just changed in my mount options had fixed it.

I went through all the usual advice. Switched from NFSv4 to NFSv3, which I was especially convinced was the fix because the timing lined up with when I'd been experimenting with v4. It wasn't. I toggled nolock, tuned rsize and wsize down from 128K to 32K, tried soft vs hard mounts, checked the Synology's HDD hibernation settings, disabled TCP offloading on the virtio NIC. Nothing actually fixed it. Every time I thought I had it, the next import over the threshold would fail and I would scream.

Then at one point I gave a couple of the VMs more RAM, thinking the media workloads could use the headroom. Everything got worse after that. Like, measurably worse. I didn't connect the two at the time.

What finally cracked it was running a dd test to write a 2GB file to the NFS mount and actually watching the numbers. With the 32K buffer mount options, the write reported 2.1 GB/s. On a gigabit link. Obviously that data is not going to the NAS. The kernel was eating the entire write into the VM's page cache, saying "yep, done!" and then trying to flush 2+ GB of dirty pages to the Synology all at once. The NAS gets hit with a wall of data it can't process fast enough, NFS RPC calls start timing out, and everything goes to hell.
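If you want to reproduce that observation yourself, here's a sketch of the dd comparison. The /mnt/nas path and the testfile name are placeholders for your own mount; swap them in before running.

```shell
# Hypothetical repro; /mnt/nas stands in for your NFS mount point.
# Without a sync flag, dd reports the speed of writing into the page
# cache, which can wildly exceed what the physical link can carry:
dd if=/dev/zero of=/mnt/nas/ddtest bs=1M count=2048

# conv=fdatasync makes dd flush dirty pages before reporting, so the
# printed rate reflects what actually reached the NAS:
dd if=/dev/zero of=/mnt/nas/ddtest bs=1M count=2048 conv=fdatasync

rm /mnt/nas/ddtest
```

A huge gap between the two numbers is the page cache absorbing the write, exactly the behavior described above.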

The default value for vm.dirty_ratio is 20, meaning the kernel will let 20% of your RAM fill up with dirty pages before it forces a writeback. On my 13GB VM that's 2.6GB of buffered writes. So the kernel would happily sit there absorbing data into RAM, and then try to shove 2.6 gigs down a gigabit pipe to the NAS all at once. And when I "upgraded" VMs with more RAM, I was literally raising the ceiling on how big that buffer could get. That's why things got worse. The inconsistency made sense too: a 700MB file might stay under the background flush threshold and trickle out fine, while a 4GB season pack would blow past it and trigger the whole mess.
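The ceiling math above can be sketched in a couple of lines. The 13GB figure is from my VM; plug in your own RAM size.

```shell
# Worked example of the dirty-page ceiling math (13GB VM, default ratio):
ram_mb=$(( 13 * 1024 ))   # RAM in MiB
dirty_ratio=20            # kernel default: percent of RAM
ceiling_mb=$(( ram_mb * dirty_ratio / 100 ))
echo "dirty page ceiling: ${ceiling_mb} MiB"   # prints 2662 MiB, ~2.6 GB
```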

The fix

Two sysctl values:

sysctl -w vm.dirty_bytes=67108864
sysctl -w vm.dirty_background_bytes=33554432

This caps the dirty page buffer at 64MB and starts background writeback at 32MB. Instead of hoarding gigabytes and flushing all at once, the kernel now pushes data out to the NAS continuously in small batches. Make it persistent:

# For distros using /etc/sysctl.d/ (Debian 12+, Ubuntu, etc.)
echo -e 'vm.dirty_bytes=67108864\nvm.dirty_background_bytes=33554432' > /etc/sysctl.d/99-nfs-dirty-pages.conf
sysctl -p /etc/sysctl.d/99-nfs-dirty-pages.conf

# For distros using /etc/sysctl.conf
echo 'vm.dirty_bytes=67108864' >> /etc/sysctl.conf
echo 'vm.dirty_background_bytes=33554432' >> /etc/sysctl.conf
sysctl -p

Before: 2GB dd writes at 101 MB/s, dying at the 2GB mark with NFS timeouts and I/O errors. After: same test, steady 11.4 MB/s start to finish, zero NFS timeouts, completes cleanly. Yeah, the throughput number is lower, but I'll take a transfer that actually finishes over one that crashes every time.
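If you want to watch the backlog build and drain for yourself, the kernel exposes the live dirty-page counters in /proc/meminfo. This is just a monitoring sketch; run it in a second terminal while a big transfer is in flight.

```shell
# Watch dirty pages accumulate and flush during a large copy.
# Dirty = data buffered in RAM awaiting writeback; Writeback = in flight.
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
```

With the default ratio you should see Dirty climb into the gigabytes before collapsing; with the 64MB cap it should hover near the new ceiling.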

I applied this across all six of my VMs that mount the NAS, and the whole fleet has been stable since. They'd all been independently building up multi-gigabyte write backlogs and dumping them onto the Synology simultaneously. I was basically DDoSing my own NAS from six directions every time anything tried to write a big file.

Then I checked the Proxmox host itself. 128GB of RAM. Four NFS mounts to the same Synology, including the one Proxmox writes VM backups to. All hard mounts with the default dirty ratio. That's a 25GB dirty page ceiling on the hypervisor. Every scheduled backup was potentially building up a 25 gigabyte write buffer and then hosing the NAS with it in one shot. And because the mounts were hard, if the Synology choked during the flush, the hypervisor itself would hang, not just a VM. I don't even want to think about how many weird backup failures and unexplained freezes this was behind.

Since applying the fix I've also noticed that Jellyfin library scans are completing reliably now. They used to hang constantly and I'd just accepted that as normal Jellyfin-over-NFS jank. The scans were generating thumbnails and writing metadata, building up dirty pages, and triggering the same flush that would take down the mount mid-scan. Audiobookshelf was doing the same thing. It would scan libraries and randomly lose connection to the mounted paths. That one was harder to pin down because audiobook files and cover art are small enough that the writes wouldn't always push past the threshold on their own. But if another VM had already half-filled the NAS's tolerance with its own flush, Audiobookshelf tipping it over would be enough. Same underlying bug in every case, and I spent months blaming three different applications for it.

If you're running a media stack on VMs with NFS mounts to a NAS and you've been tearing your hair out over random timeouts, check your vm.dirty_ratio and do the math against your RAM. I bet it's higher than you think.
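Doing that math against the live values takes three lines; this reads the standard /proc interfaces, so it should work on any Linux box. Note that dirty_ratio reads as 0 once dirty_bytes has been set (the two settings are mutually exclusive).

```shell
# Compute this machine's actual dirty-page ceiling from the live settings:
ratio=$(cat /proc/sys/vm/dirty_ratio)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "dirty_ratio=${ratio}% => ceiling: $(( mem_kb * ratio / 100 / 1024 )) MiB"
```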

TL;DR: If your NFS mounts to a NAS randomly time out during large writes, your VMs are probably buffering gigabytes of dirty pages in RAM and then flushing them all at once, overwhelming the NAS. Symptoms in my case were Jellyfin stopping mid-playback and hanging during library scans, Audiobookshelf losing connection to mounted paths and forgetting playback position, and Sonarr/Radarr locking up completely when trying to import episodes. Set vm.dirty_bytes=67108864 and vm.dirty_background_bytes=33554432 on every VM (and the hypervisor) to cap the buffer at 64MB and force continuous small writebacks instead.


Edit 1: @deadcade pointed out that 11.4 MB/s is suspiciously close to a 100 Mbps link ceiling, and they were right. Checked the NAS LAN1 network status and it was negotiating at 100 Mbps... The NAS was plugged into my router, which has gigabit ports but was apparently negotiating down due to what I must assume is an issue with the router.

So the real solution: I went to Best Buy and grabbed a $20 gigabit switch, plugged the NAS and Proxmox host into it directly, and the Synology came up at 1000 Mbps immediately. Same 2GB dd test now completes at 107 MB/s from the host and 115 MB/s from the VM, no timeouts, totally clean.

So if I actually understand wtf is going on here... it was actually two problems stacked on top of each other this entire time.

The 100 Mbps link was the speed ceiling between the router and the NAS. The dirty page defaults were what turned that speed limitation into a catastrophic failure. The kernel would buffer gigabytes of writes and then try to flush them through a 100 Mbps pipe where the NFS RPCs would time out long before the data finished arriving. The sysctl fix worked because it accidentally rate-limited the client to roughly what the 100 Mbps link could handle. Fixing the link speed solved the actual bottleneck.

Thanks for the insight, deadcade!

Both fixes stay though. 64MB dirty page cap on a gigabit link still saturates the connection at 115 MB/s and there's no reason to let a 128GB Proxmox host build up a 25GB write buffer aimed at a consumer NAS. Also check your link speeds.
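For checking link speeds without installing anything, the negotiated rate is exposed in sysfs. The interface name eth0 below is a placeholder; list /sys/class/net to find yours.

```shell
# Negotiated speed in Mb/s, straight from sysfs; no root needed.
# A reading of 100 means the port fell back to Fast Ethernet.
cat /sys/class/net/eth0/speed

# If ethtool is installed, it shows the same plus duplex and autoneg:
# ethtool eth0 | grep -E 'Speed|Duplex'
```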

Edit 2: Thanks again to everyone who chimed in with your fantastic insights and ideas.


Probably a silly question, but the .uk domain is really cheap. If I'm not in the UK, can I still use that domain for my server without issue?

It's like 50 bucks for a ten-year lease


live-tucker-reaction


The UK government is trying to undermine the work of journalists who keep the public informed. Officials are now trying to claim they must tighten Freedom of Information (FOI) rules to defend against China. Meanwhile US president Donald Trump’s administration has publicly attacked independent media outlet Drop Site News for… telling the truth.


The fine includes £450,000 for lack of age checks to prevent children from seeing pornography.

Archived version: https://archive.ph/kSmTG


Been trying to set up NordVPN on my Steam Deck

Unsupported config, so they wouldn't give me a flatpak script

Open to suggestions, even from non-hosers


Automotive supplier Aumovio, formerly Continental, plans to withdraw from Lithuania and shut down its operations in the Kaunas Free Economic Zone by the end of 2028, according to reports citing Reuters and sources.


Who could have possibly predicted this, besides everyone?

Archived version: https://archive.is/20260319205859/https://www.404media.co/rip-metaverse-an-80-billion-dumpster-fire-nobody-wanted/


Create


Where are we headed? Wanna see me do some beanouts first
