thecoffeehobbit

joined 10 months ago
[–] thecoffeehobbit@sopuli.xyz 5 points 4 months ago

Huh.

There's a time and a place for a DIY solution, and academia can certainly be like that sometimes.

The latest Mac Mini can't run Linux though. It's M4, and Asahi doesn't even support M3 chips yet. But if you actually got the previous model with an M1/M2, you can run Linux if desired. I might not attempt it, and would just use the Mac as a server as-is. It's not too different from Linux. Asking the duck for "how to xx on Mac" when you already know the Linux equivalents should make your life tolerable.
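
A rough sketch of the Linux-to-macOS translation I mean, for server duty. This assumes Homebrew is installed; the nginx package is just an example:

```shell
# Rough Linux → macOS equivalents (illustrative; nginx is an example package)
brew install nginx            # roughly: apt install nginx
brew services start nginx     # roughly: systemctl enable --now nginx
launchctl list                # roughly: systemctl list-units
log show --last 1h            # roughly: journalctl --since "1 hour ago"
```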

[–] thecoffeehobbit@sopuli.xyz 8 points 4 months ago (1 children)

Get base Debian, you'll have more options for the desktop environment. Once you get past the installation hassle it should just work for the rest of time. MX has its place, but it's specifically made to run without systemd, which may not be something a new user is looking for. It feels very opinionated, is what I'm trying to say. It may be your thing of course, but I'd recommend reading up on its philosophy before picking it.
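
On base Debian you can add or swap the desktop environment after the fact, which is what I mean by more options; a sketch (run as root, pick one):

```shell
# Debian ships desktop environments as task metapackages
apt install task-gnome-desktop    # GNOME
apt install task-kde-desktop      # KDE Plasma
apt install task-xfce-desktop     # Xfce
# or rerun the interactive selector from the installer:
tasksel
```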

8 years is probably not old enough to require lighter desktops if the machines were at least mid-range at the time. You should be able to use GNOME or KDE as you please. Nothing against XFCE in principle, but it can be a little clunky, especially on a laptop. No touch gestures, for example.

[–] thecoffeehobbit@sopuli.xyz 12 points 6 months ago

Yeah we're gonna use like 1 tanker for that, 2 on a busy day. The 28 others are going somewhere else

[–] thecoffeehobbit@sopuli.xyz 2 points 7 months ago

I have an external storage unit a couple of kilometers away and two 8TB hard drives with LUKS+btrfs. One of them is always in the box; after taking backups, when I feel like it, I detach the drive and bike to the box to swap them. I'm currently researching btrbk for updating the backup drive on my PC automatically; it's pretty manual atm. For most scenarios the automatic btrfs snapshots on my main disks are going to be enough anyway.
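
The manual part looks roughly like this; device path, mapper name, and snapshot paths are made-up examples, not my actual setup:

```shell
# Refresh the offsite drive (sketch; names are placeholders)
cryptsetup open /dev/sdX backup8tb         # unlock the LUKS container
mount /dev/mapper/backup8tb /mnt/backup    # mount the btrfs filesystem

# incremental send relative to the previous snapshot (-p = parent)
btrfs send -p /snapshots/home.prev /snapshots/home.new | \
    btrfs receive /mnt/backup/snapshots

umount /mnt/backup
cryptsetup close backup8tb
```

btrbk essentially automates this send/receive loop from a config file, which is why I'm looking at it.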

[–] thecoffeehobbit@sopuli.xyz 1 points 8 months ago* (last edited 8 months ago)

Oh yeah, and I did enable the Proxmox VM firewall for the TrueNAS VM; the NFS traffic goes via an internal interface. I wasn't entirely convinced by NFS's security posture when reading about it... at least restrict it to the physical machine 0_0 So I now need to intentionally pass a new NIC to any VM that will access the data, which is neat.
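
TrueNAS configures this through its UI, but in plain NFS exports syntax the restriction amounts to something like this; the dataset path and internal subnet are made-up examples:

```shell
# /etc/exports-style fragment (illustrative): only the internal bridge
# subnet may mount the share
/mnt/tank/data  10.10.10.0/24(rw,sync,no_subtree_check)
```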

[–] thecoffeehobbit@sopuli.xyz 1 points 8 months ago (1 children)

A wrap-up of what I ended up doing:

  • Replaced the bare-metal Ubuntu with Proxmox. Cool cool. It can do the same stuff but more easily, and comes with a lot of hints for best practices. Guess I'm a datacenter admin now
  • Wiped the 2x960GB SSD pool and re-created it with ZFS native encryption
  • Made a TrueNAS Scale VM, passed through the SSD pool disks, shared the datasets over NFS, and made snapshot policies
  • Mounted the NFS share on the Ubuntu VM running my data-related services and moved the docker bind mounts to that folder
  • Bought a 1Gbps Intel network card to use instead of the onboard Realtek and maxed out the host memory to 16GB for good measure
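
For reference, the pool re-creation step in plain ZFS terms (TrueNAS does this through its UI; pool and disk names here are examples, and with keyformat=passphrase it prompts for the passphrase on creation):

```shell
# Mirrored pool with ZFS native encryption (sketch)
zpool create -o ashift=12 \
    -O encryption=aes-256-gcm \
    -O keyformat=passphrase \
    -O compression=lz4 \
    tank mirror /dev/sda /dev/sdb
```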

I have achieved:

  • 15min RPO for my data (as it sits on the NFS mount, which is auto-snapshotted in TrueNAS)
  • Encryption at rest (ZFS native)
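
The 15-minute RPO is just TrueNAS's periodic snapshot task; outside TrueNAS the same cadence would look roughly like this cron entry (dataset name is an example; `%` must be escaped in crontab):

```shell
# Snapshot tank/data every 15 minutes (illustrative cron fragment)
*/15 * * * * zfs snapshot tank/data@auto-$(date +\%Y\%m\%d-\%H\%M)
```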

I have not achieved (yet..):

  • Key fetch on boot. Right now, when the host machine boots, I have to log in to TrueNAS and key in the ZFS passphrase. Key fetching on boot is a paid feature in TrueNAS, so I'll have to make some custom script for this anyway; TrueNAS just makes managing the storage a bit easier, which is why I want to use it now. I've disabled auto-start on boot for the services VM that depends on the NFS share, so I'll just go kick it up manually after unlocking the pool in TrueNAS.
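
The custom script would presumably look something like this; the key-server URL and pool name are made-up placeholders, and with keyformat=passphrase `zfs load-key` reads the passphrase from stdin:

```shell
#!/bin/sh
# Sketch of a boot-time unlock script (hypothetical key server)
set -eu
KEY="$(curl -fsS https://keyserver.lan/zfs-passphrase)"  # placeholder URL
printf '%s' "$KEY" | zfs load-key tank                   # pool name is an example
zfs mount -a
```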

Quite happy with the setup so far. Looking to automate actual backups next, but this is starting to take shape. I'm building the confidence to use this for my actual phone backups, among other things.

[–] thecoffeehobbit@sopuli.xyz 2 points 8 months ago

Really good to know. I planned to keep using very mainstream LTS versions anyway, but this solidifies the decision. Maybe on a laptop I'll install something more experimental, but that would be throwaway-style anyway.

[–] thecoffeehobbit@sopuli.xyz 2 points 8 months ago

I guess I'll give it a spin. There seems to be a big community around it. I initially thought I might migrate later, so I'm keeping the host OS layer as thin as possible. Ubuntu was mainly an easy start since I was familiar with it from before, and the spirit of this initiative is DIY over framework... but if there's a widely used solution for exactly this, yeah.

[–] thecoffeehobbit@sopuli.xyz 1 points 8 months ago

Always a good reminder to test the backups; no, I would not sleep properly if I didn't test them :p

I'm aiming to keep it simple. There are too many moving parts in VM snapshots, and it's hard to figure out best practices and notice mistakes without work experience in the area, so I'll just back up the data separately and call it a day. But thanks for the input! I don't think any of my services have in-memory DBs.

[–] thecoffeehobbit@sopuli.xyz 2 points 8 months ago (2 children)

Right, thanks for the heads-up! On the desktops I have simply installed ZFS as root via the Ubuntu 24.04 installer. Then, as the option was not available in the server variant, I started to think maybe that's not something that should be done :p

[–] thecoffeehobbit@sopuli.xyz 2 points 8 months ago (4 children)

Aight, thank you so much, this confirms I'm on the right path! It clarifies a lot; I'll keep the ext4 boot drive :)

[–] thecoffeehobbit@sopuli.xyz 1 points 8 months ago (2 children)

Right, so my aversion to live backups comes initially from Louis Rossmann's guide on the FUTO wiki, where he mentions it's non-trivial to reliably snapshot a running system. After a lot of looking elsewhere I haven't gotten many hints that this would be bad advice, and I want to err on the side of caution anyway. The hypervisor is QEMU/KVM, so in theory it should be able to do live snapshots afaik, but I'm not familiar enough with the consistency guarantees to fully trust it. I don't wanna wake up one day to a server crash, try to mount the backed-up qcow2 in a new system, and find it suddenly doesn't work and I've lost data.

It won't matter though, as I'll just place all the important data on the zpool and back that up frequently as a simple data store. The VMs can keep doing their nightly shutdown-and-snapshot thing.
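
For completeness, if I ever did trust live snapshots, Proxmox's own tooling supports them; with the QEMU guest agent enabled in the VM options, vzdump asks the guest to freeze its filesystems before the snapshot. VM ID 101 and the storage name are examples:

```shell
# Live snapshot / backup with Proxmox tooling (sketch)
qm snapshot 101 pre-backup                    # live snapshot of VM 101
vzdump 101 --mode snapshot --storage backups  # backup using snapshot mode
```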
