sugar_in_your_tea


Use something like Backblaze or Hetzner storage boxes for off-site backups. There are a number of tools for making this painless, so pick your favorite. If you have the means, I recommend doing a disaster recovery scenario every so often (i.e. disconnect existing drives, reinstall the OS, and load everything from remote backup).

Generally speaking, follow the 3-2-1 rule:

  • 3 copies of everything on
  • 2 different types of media with
  • 1 copy off site (at least)

For your situation, this could be:

  • 3 copies - your computer (NVMe?), TrueNAS (HDD?), and an off-site backup; ideally have a third local device (second computer?)
  • 2 media - NVMe and HDD
  • 1 copy off site - Backblaze, Hetzner, etc
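
To make that concrete, here's a minimal restic workflow against a Hetzner storage box over sftp (the u123456 account name and the backed-up paths are placeholders, and restic is just one of several tools that work here):

    # one-time: create an encrypted repository on the storage box
    # (restic prompts for a repo password; see also RESTIC_PASSWORD_FILE)
    restic -r sftp:u123456@u123456.your-storagebox.de:backups init

    # regularly: back up, then verify repository integrity
    restic -r sftp:u123456@u123456.your-storagebox.de:backups backup /home /srv
    restic -r sftp:u123456@u123456.your-storagebox.de:backups check

    # disaster recovery drill: restore everything to a scratch directory
    restic -r sftp:u123456@u123456.your-storagebox.de:backups restore latest --target /mnt/restore

Wire the backup/check pair into a cron job or systemd timer and the whole thing stays painless.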

You could rent a cloud server, but it'll be a lot more expensive vs just renting storage.

Exactly.

There's a difference between gatekeeping and being transparent about what's expected. I'm not suggesting people do it the hard way as some kind of hazing ritual, but because there's a lot of practical value in learning to maintain your system yourself. Arch is simple, and their definition of simple means the devs aren't going to do a ton for you beyond providing good documentation. If your system breaks, that's on you, and it's on you to fix it.

If reading through the docs isn't your first instinct when something goes wrong, you'll probably have a better experience with something else. There are plenty of other distros that will let you offload a large amount of that responsibility, and that's the right choice for most people because most people don't want to mess with their system, they want to use it.

Again, it's not gatekeeping. I'm happy to help anyone work through the install process. I won't do it for you, but I'll answer any questions you have by pointing you at the right part of the docs.

If you have reasonable practices, git blame will show you the original ticket, a link to the code review, and relevant information about the change.
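
In practice that lookup is two commands (the file name and commit hash here are made up for the example):

    git blame -L 120,140 src/auth.py   # which commit last touched these lines?
    git show abc1234                   # full message: ticket ID, review link, rationale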

Then just do it in your greenhouse. If you don't have one, ask your help to build one.

Yes, Arch is really stable and has been for about 10 years. In fact, I started using Arch just before it got really stable (around the /usr merge) and stuck with it for a few years after. It's a fantastic distro! If openSUSE Tumbleweed stopped working for me, I'd probably go back to Arch. I ran it on multiple systems, and my main reason for switching was that I wanted a stable release cycle for servers and rolling release on desktop, with the same tools on both.

It has fantastic documentation, true, but most likely a new user isn't going to go there, they'll go to a forum post from a year ago and change something important. The whole point of going through the Arch install process is to force you to get familiar with the documentation. It's really not that hard, and after the first install (which took a couple hours), the second took like 20 min. I learned far more in that initial install than I did in the 3-ish years I'd used other distros before trying Arch.
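
For a sense of scale, the heart of that first install is only a handful of commands once you've partitioned and mounted your target at /mnt (condensed from the wiki's Installation Guide; device setup varies, and UEFI with systemd-boot is assumed here):

    pacstrap -K /mnt base linux linux-firmware   # install the base system
    genfstab -U /mnt >> /mnt/etc/fstab           # generate mount entries
    arch-chroot /mnt                             # enter the new system
    # inside the chroot: timezone, locale, hostname, root password, users...
    bootctl install                              # set up systemd-boot on the ESP

Each of those steps maps to its own wiki page, which is exactly the familiarity the process is meant to build.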

CachyOS being easy to set up defeats the whole purpose, since users won't get familiar with the wiki. By all means, install CachyOS immediately after the Arch install, but do yourself a favor and go through the Arch process once. You'll understand everything from the boot process to managing system services so much better.

I 100% agree. If you want the Arch experience, you should have the full Arch experience IMO, and that includes the installation process. I don't mean this in a gatekeepy way, I just mean that's the target audience and that's what the distro is expecting.

For a new user, I just cannot recommend Arch because, chances are, that's not what they actually want. Most new users want to customize stuff, and you can do that with pretty much every distro.

For new users, I recommend Debian, Mint, or Fedora. They're release-based, which is what you want when starting out so stuff doesn't change on you, and they have vibrant communities. After using one for a year or two, you'll figure out what you don't like about it and can pick something else.


I disagree. If you want to use Arch for the first time, install it the Arch way. It's going to be hard, and that's the point. Arch will need manual intervention at some point, and you'll be expected to fix it.

If you use something like Manjaro or CachyOS, you'll look up commands online and maybe it'll work, but it might not. There's a decent chance you'll break something, and you'll get mad.

Arch expects you to take responsibility for your system, and going through the official install process shows you can do that. Once you've been through it, go ahead and use an installer or a fork. You'll know where to find the documentation when something inevitably breaks, so you're good to go.

If you're unwilling to do the Arch install process but still want a rolling release, consider openSUSE Tumbleweed. It's the trunk for several other projects, some of them commercial, so it gets a lot of professional eyeballs. Every change has to pass an automated test suite, and I've seen plenty of cases where they held a change back because a test failed. And when something does break (and it probably will eventually), you just snapper rollback and wait a few days. The community isn't as big as other distros', so I don't recommend it as a first distro, but they're also not nearly as impatient as the Arch forums.
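
The recovery flow when an update does break something is short (this assumes openSUSE's default Btrfs + snapper setup):

    # from the GRUB menu, boot into a read-only snapshot that still works, then:
    sudo snapper rollback   # promote that snapshot to the new default
    sudo reboot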

Arch is a great distro; I used it for a few years without any major issues, though I did need to intervene several times. I've been on Tumbleweed about as long, and I've only had to snapper rollback a few times; that was the extent of the intervention.

All of them? Maybe an international consortium that pays devs in their home currency.

Back when I used an HDD in my laptop, I was able to get my boot down to 20s or so. I don't understand what MS is doing...


You know what I want MS to do? Remove all the extra crap and just be a simple OS. The desktop should use 500MB or so of memory, boot should take a few seconds, and programs should launch in a couple of seconds. Don't do any weird caching nonsense; I don't need tens of GBs of OS bloat. Just give me a simple OS.

I have that w/ Linux. The only value Windows provides is app compatibility. Stop trying to be anything more than that.


Yup, and Linux probably boots faster. On my NVMe w/ full-disk encryption (done in software at the filesystem layer, not in the disk's microcontroller), I boot to desktop in like 5 sec or less, and the desktop is fully usable. If I want to launch a program, I type the name and hit enter, and it launches in a couple of seconds.
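
If you're curious where your own boot time goes, systemd will break it down for you:

    systemd-analyze                  # firmware + loader + kernel + userspace totals
    systemd-analyze blame            # units sorted by startup time
    systemd-analyze critical-chain   # the dependency chain that gated boot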

My M3 Mac is a little worse, since it gets confused about launching an app vs looking for a file, and it takes a bit longer to boot (20-30 seconds?).

But my SO's Windows machine is something else. It takes a minute or two to boot, and after that it takes a minute or two to "settle." I have no idea what it's doing, but I generally get up and get a drink or something when my SO asks me to get something pulled up. Why is it so crappy?

The basic service is free. There's an enterprise tier with more features, such as prioritizing IP ranges (e.g. geographical areas the company operates in).


Current setup:

  • one giant docker compose file
  • Caddy for TLS termination (Caddyfile sketch below)
  • the only exposed port is Caddy's
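
The Caddy side is only a couple of lines per service; the host names and upstream ports here are made-up examples:

    # Caddyfile: one site block per service, HTTPS certs handled automatically
    seafile.example.com {
        reverse_proxy seafile:8000
    }

    blog.example.com {
        reverse_proxy blog:8080
    }

Since Caddy fetches and renews the certificates itself, it's the only container that needs 80/443 published.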

I've been trying out podman, and I got a new service (seafile) running via podman generate kube so I can run it w/ podman kube play. My understanding is that the "podman way" is to use quadlets, i.e. .container, .network, etc. files managed by systemd, so I tried podlet podman kube play to generate a systemd-compatible file, but it just spat out a .kube file.
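
Concretely, the sequence was something like this (using seafile as the example name):

    podman generate kube seafile > seafile.yaml   # snapshot the running container as kube YAML
    podman kube play seafile.yaml                 # recreate it from that YAML
    podlet podman kube play seafile.yaml          # emits a .kube quadlet wrapping the YAML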

Since I'm just starting out, it wouldn't be a ton of work to convert to separate unit files, or I can continue with the .kube file way. I'm just not sure which to do.

Here's what I'd like to end up with:

  • Caddy is the only exposed port - I could block the rest w/ a firewall, but it would be nice if the other services talked over a hidden network
  • each service works as its own unit, so I can reuse ports and whatnot - I may move services across devices eventually, and I'd rather not have to remember custom ports and instead use host names
  • automatically update images - shouldn't change the tag, just grab the latest from that tag

Is there a good reason to prefer .kube over .container et al., or vice versa? Which is the "preferred" way to do this? Both are documented on the same "quadlet" doc page, which just describes the acceptable formats. I don't think I want Kubernetes anytime soon; the only reason I went that way is that it looked similar to compose.yml and I saw a guide for it, but I'm willing to put in some work to port from it if needed (and the docs for the kube YAML format kinda suck). I just want a way to ship around a few files so moving a service to a new device is easy. I'll only really have like 3-4 devices (NAS, VPS, and maybe an RPi or two), and I currently only have one (NAS).
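
From my reading of the podman-systemd.unit docs, the .container route for seafile would look something like this (image name, volume path, and network name are just examples; untested):

    # ~/.config/containers/systemd/seafile.container
    [Unit]
    Description=Seafile

    [Container]
    Image=docker.io/seafileltd/seafile-mc:latest
    AutoUpdate=registry        # picked up by podman-auto-update.timer
    Network=internal.network   # a .network quadlet; only Caddy would publish ports
    Volume=%h/containers/seafile:/shared

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target

plus a ~/.config/containers/systemd/internal.network file containing just a [Network] section for the hidden network. After a systemctl --user daemon-reload, systemctl --user start seafile.service should bring it up, and enabling podman-auto-update.timer would cover the "update images automatically" item.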

Also, is there a customary place to stick stuff like config files? I'm currently using my user's home directory, but that's not great long-term. I'll rarely need to touch these, so I guess I could stick them on my NAS mount (currently /srv/nas/) next to the data (/srv/nas//). But if there's a standard place to stick this, I'd prefer to do that.

Anyway, just looking for an opinionated workflow to follow here. I could keep going with the kube YAML route or switch to the .container route; I don't mind either way, since I'm still early in the process. I'm currently thinking of porting to the .container method to try it out, but I don't know if that's the "right" way, or if ".kube" with a YAML config is.


Apparently Hetzner reduced US bandwidth to 1TB for their base plan, though the same plan gets 20TB in Europe. I don't use much bandwidth right now, but I could need more in the future depending on how I do backups and whatnot.

So I'm shopping around in case I need to make a switch. Here's what I use it for:

  • VPN to get around CGNAT - so all traffic for my internal services goes through it
  • HAProxy - forwards traffic to my various services (sketch below)
  • small test servers - very low requirements, basically just STUN servers
  • low traffic blog
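
For context, the VPS side of that is tiny: a WireGuard tunnel to the home box plus a TCP-mode HAProxy, something like this fragment (addresses and ports are examples; a full config also needs global/defaults sections):

    # /etc/haproxy/haproxy.cfg (fragment): pass TLS straight through the tunnel
    frontend https_in
        bind *:443
        mode tcp
        default_backend home

    backend home
        mode tcp
        server nas 10.8.0.2:443 check   # WireGuard address of the server behind CGNAT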

Hard requirements:

  • custom ISO, or at least openSUSE support
  • inexpensive - shooting for ~$5/month, I don't need much
  • decent bandwidth (bare minimum 50 Mbps, ideally 1 Gbps+) with high-ish caps - I won't use much data most of the time (a handful of GB), but occasionally might use 2-5TB

Nice to have:

  • unmetered/generous bandwidth - would like to run a Tor relay
  • inexpensive storage - need to put my offsite backups somewhere
  • API - I'm a nerd and like automating things :)
  • location near me - I'm in the US, so anywhere in NA works

Not needed:

  • fast processors
  • lots of RAM
  • loose policies around torrenting and processing (no crypto or piracy here)
  • support features, recipes, etc - I can figure stuff out on my own

I'll probably stick with Hetzner for now because:

  • pricing is still fair (transfer is in line with competitors)
  • can probably move my server to Germany w/o major issues for more bandwidth
  • they hit all of the other requirements, nice to haves, and many unneeded features

Anyway, thoughts? The bandwidth change pisses me off, so let me know if there's a better alternative.
