this post was submitted on 03 Aug 2025
31 points (91.9% liked)

Selfhosted


I tried to find a more applicable community to post this to but didn't find anything.

I recently set up a NAS/server on a Raspberry Pi 5 running Raspberry Pi OS (see my last post). Since then I've installed everything into a 3D-printed enclosure and set up RAID (ZFS RAIDZ1). Before setting up RAID, I could transfer files to/from the NAS at around 200 MB/s, but now that RAID is seemingly working, transfers run at around 28-30 MB/s. A couple of searches turned up a suggestion to disable sync ($ sudo zfs set sync=disabled zfspool). I tried that, but it doesn't seem to have had any effect. Any suggestions are welcome, but keep in mind that I barely know what I'm doing.
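In case it helps anyone reproduce this: you can confirm whether the sync setting actually applied, and watch the write pattern live, with something like the following (assuming the pool is named zfspool, as in the command above):

```shell
# Confirm the sync property actually applied to the pool
# (pool name "zfspool" taken from the command above).
zfs get sync zfspool

# Watch per-vdev read/write throughput once per second; this makes a
# bursty write pattern (short bursts, multi-second pauses) easy to see.
zpool iostat -v zfspool 1
```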

Edit: When I look at the SATA HAT, the LEDs show the drives being written to for less than half a second, followed by a pause of about 4 seconds with no write activity.
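For what it's worth, that burst-then-pause pattern may just be normal ZFS behavior rather than a fault: ZFS batches asynchronous writes into transaction groups and flushes one to disk every few seconds (the OpenZFS zfs_txg_timeout tunable, which defaults to 5 seconds), which would line up with writes separated by roughly 4-5 second gaps. You can check the current value on the Pi:

```shell
# Show the transaction-group flush interval in seconds (OpenZFS
# defaults to 5); compare this against the gap between write bursts.
cat /sys/module/zfs/parameters/zfs_txg_timeout
```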

[–] 3dcadmin@lemmy.relayeasy.com 2 points 1 week ago

This is the limit of a slow interface and five drives. See my other reply about enabling faster PCIe speeds. Because of how ZFS works, five drives can be slower than three: they take more cache, and write speeds especially will suffer, quite a lot. With five drives and 16 GB of RAM you can easily give ZFS a 12 GB cache to help it along; I'd guess the caching behavior is why you're seeing large gaps between writes. As someone else said, a Pi doesn't do well in this case, but I reckon you can improve it. That said, it's never going to be a speedy solution: secure and safe for your data, but not fast.
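A sketch of both suggestions, assuming a Pi 5 on Raspberry Pi OS with OpenZFS installed (the 12 GB figure is the cache size mentioned above, and the PCIe line is the standard Pi 5 config.txt tweak, not something verified on this particular setup):

```shell
# 1) Cap the ZFS ARC (read/metadata cache) at 12 GiB. The value is in
#    bytes: 12 * 1024^3 = 12884901888.
echo "options zfs zfs_arc_max=12884901888" | sudo tee /etc/modprobe.d/zfs.conf
sudo update-initramfs -u   # rebuild initramfs so the option applies at boot

# 2) Force the Pi 5's PCIe link to gen 3 for more bandwidth to the
#    SATA HAT (append to /boot/firmware/config.txt, then reboot).
echo "dtparam=pciex1_gen=3" | sudo tee -a /boot/firmware/config.txt
```

Note that the ARC is primarily a read cache; it won't fix slow sustained writes on its own, but it can smooth out mixed workloads.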