sugar_in_your_tea

joined 2 years ago
[–] sugar_in_your_tea@sh.itjust.works 29 points 9 hours ago (1 children)

As a manager of sorts, I already know if people are doing their job: the work gets done. I'm involved enough to know how much time their work should take and see through their BS since I do similar work.

Maybe these bosses should do the same: do similar work and you'll know when they're BS-ing you.

I see you ignored my entire comment.

No, I responded to the relevant part. I was using segfault as a metaphor, not arguing that it's actually the same mechanism underneath. If you're getting panics in production code, I consider that just as much of an emergency to fix as a segfault, and Rust helpfully gives you stack trace info with it. It's not the same idea as an exception, which could signify an unrecoverable error or an expected issue that can be recovered from.

I don’t know what is more explicit about expect

It forces you to write a message, so most temporary uses will be unwrap(). I use unwrap() all the time when prototyping the happy path, then do proper error handling later. This is especially true in larger projects where I can't just throw in anyhow or something and actually need to map error types and whatnot. I don't use expect() much (my current hobby project has 4 uses: 3 for startup issues and 1 for a hopefully-impossible condition), but I think it makes sense when there's no way to continue.
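To illustrate what I mean (totally made-up config-loading example, not from any real project): the prototype leans on unwrap(), and the cleanup maps errors into a project error type instead.

```rust
use std::fs;
use std::num::ParseIntError;

// Hypothetical project error type, purely for illustration.
#[derive(Debug)]
enum ConfigError {
    Io(std::io::Error),
    Parse(ParseIntError),
}

// Prototype: happy path only, unwrap() everywhere.
#[allow(dead_code)]
fn load_port_prototype(path: &str) -> u16 {
    fs::read_to_string(path).unwrap().trim().parse().unwrap()
}

// Cleaned up later: errors mapped into the project error type, no panics.
fn load_port(path: &str) -> Result<u16, ConfigError> {
    let raw = fs::read_to_string(path).map_err(ConfigError::Io)?;
    raw.trim().parse().map_err(ConfigError::Parse)
}

fn main() {
    match load_port("/tmp/port.conf") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad config: {e:?}"),
    }
}
```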

But yes, unwrap() is perhaps the first thing I look for as a reviewer, which is why it's so surprising that this is the issue. At the very least, it should have been something like expect("exceeds max file size"). I personally prefer explicit panics in production code, but expect is close enough that it's personal preference.
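For contrast, here's roughly what I mean by an explicit panic vs expect() at startup (toy example, the env var name is made up):

```rust
fn main() {
    // expect(): panics with a message; fine when there's genuinely no way to continue.
    let db_url = std::env::var("APP_DB_URL")
        .expect("APP_DB_URL must be set before startup");

    // Explicit panic: same end result, but the intent to crash is unmistakable
    // to whoever reads or reviews this later.
    if db_url.is_empty() {
        panic!("APP_DB_URL is set but empty; refusing to start");
    }

    println!("connecting to {db_url}");
}
```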

[–] sugar_in_your_tea@sh.itjust.works 1 points 1 day ago (2 children)

Yes, it's not the same since you get a stack trace (if enabled) and a message, but it's the closest thing you get in safe Rust (outside compiler bugs). I compare it to a segfault because it's almost as unhandleable.

Basically, you don't want a panic to crash your program in most cases. If you do, make it explicit (i.e. with expect()). unwrap() tells me the value is absolutely there or the dev is lazy, and I always assume the latter unless there's an explanation (or it's obvious from context) otherwise.

Nah, if there's one thing they thoroughly test, it's the spying.

[–] sugar_in_your_tea@sh.itjust.works 2 points 2 days ago* (last edited 2 days ago) (4 children)

No, it's a panic, so it's more similar to a segfault, but with some amount of unwinding. It can be "caught" but only at a thread boundary.
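Rough sketch of what I mean by catching it at a thread boundary (toy code, obviously not from the incident):

```rust
use std::thread;

fn main() {
    // The panic unwinds the spawned thread; join() hands it back as an Err
    // instead of taking down the whole process.
    let handle = thread::spawn(|| {
        let empty: Vec<u32> = Vec::new();
        *empty.first().unwrap() // panics: there is no first element
    });

    match handle.join() {
        Ok(v) => println!("got {v}"),
        Err(_) => eprintln!("worker thread panicked; parent keeps running"),
    }
}
```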

It is unwrap's fault. If they'd handled it properly, they would've had to deal with the problem explicitly, which would've clarified exactly what the problem was. In this case, I'd probably use expect() to add context. And when doing anything with strict size requirements, I'd also explicitly check the size to make sure it'll fit, again for better error reporting.
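Something like this hypothetical guard is what I have in mind (the limit and message are made up, not Cloudflare's actual code):

```rust
// Made-up limit purely for illustration.
const MAX_FEATURE_FILE_BYTES: usize = 1_000_000;

// Reject oversized input up front with a descriptive error instead of letting
// a later unwrap() turn it into an opaque panic.
fn check_size(data: &[u8]) -> Result<(), String> {
    if data.len() > MAX_FEATURE_FILE_BYTES {
        return Err(format!(
            "input is {} bytes, which exceeds the max of {} bytes",
            data.len(),
            MAX_FEATURE_FILE_BYTES
        ));
    }
    Ok(())
}

fn main() {
    let too_big = vec![0u8; MAX_FEATURE_FILE_BYTES + 1];
    if let Err(e) = check_size(&too_big) {
        eprintln!("refusing to load: {e}");
    }
}
```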

Proper error reporting could've made this a 5-min investigation.

Also, the problem in the first place should've been caught with unit tests and a test deploy. Our process here is:

  1. Any significant change to queries is tested with a copy of production data
  2. All changes are tested in a staging environment similar to production
  3. All hotfixes are tested with a copy of production data

And we're not a massive software shop: we have a few dozen devs in a company of thousands of people. If I worked at Cloudflare, I'd have more rigorous standards given the global impact of a bug (we have a few hundred users, not billions like Cloudflare).

It is precious and beyond compare. It has tools that most other languages lack to prove certain classes of bugs are impossible.

You can still introduce bugs, especially when you use certain features that the "standard" linter (clippy) can flag and that no team would silence globally. .unwrap() is very controversial in Rust and should never be used without clear justification in production code. Even in my pet projects, it's the first thing I clear out once basic functionality is there.

This issue should've been caught at three separate stages:

  1. git pre-commit or pre-push should run the linter on the dev's machine
  2. Static analysis checks should catch this both before getting reviews and when deploying the change
  3. Human code review

The fact that it made it past all three makes me very concerned about how they do development over there. We're a much smaller company and we're not even a software company (software dev is <1% of the total company), and we do this. We don't even use Rust (we're a Python shop), yet we have robust static analysis for every change. It's standard, and any company building anything more than a small in-house tool used by 3 people should have these standards in place.
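For the first two stages, a Rust team could even make the unwrap() rule mechanical; clippy has unwrap_used/expect_used lints for exactly this, though whether to deny them crate-wide is a team choice, so treat this as a sketch:

```rust
// Deny-by-attribute so `cargo clippy` fails on any unwrap()/expect() in the crate.
#![deny(clippy::unwrap_used, clippy::expect_used)]

fn main() {
    // This would now fail the lint run instead of waiting for a human reviewer:
    // let n: i32 = "42".parse().unwrap();
    let n: i32 = "42".parse().unwrap_or_default();
    println!("{n}");
}
```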

[–] sugar_in_your_tea@sh.itjust.works 2 points 2 days ago (1 children)

Use something like Backblaze or Hetzner storage boxes for off-site backups. There are a number of tools for making this painless, so pick your favorite. If you have the means, I recommend doing a disaster recovery scenario every so often (i.e. disconnect existing drives, reinstall the OS, and load everything from remote backup).

Generally speaking, follow the 3-2-1 rule:

  • 3 copies of everything on
  • 2 different types of media with
  • 1 copy off site (at least)

For your situation, this could be:

  • 3 copies - your computer (NVMe?), TrueNAS (HDD?), off-site backup; ideally have a third local device (second computer?)
  • 2 media - NVMe and HDD
  • 1 copy off site - Backblaze, Hetzner, etc

You could rent a cloud server, but it'll be a lot more expensive than just renting storage.

Exactly.

There's a difference between gatekeeping and being transparent about what's expected. I'm not suggesting people do it the hard way as some kind of hazing ritual, but because there's a lot of practical value in maintaining your system that way. Arch is simple, and their definition of simple means the devs aren't going to do a ton for you outside of providing good documentation. If your system breaks, that's on you, and it's on you to fix it.

If reading through the docs isn't your first instinct when something goes wrong, you'll probably have a better experience with something else. There are plenty of other distros that will let you offload a large amount of that responsibility, and that's the right choice for most people because most people don't want to mess with their system, they want to use it.

Again, it's not gatekeeping. I'm happy to help anyone work through the install process. I won't do it for you, but I'll answer any questions you might have by showing you where in the docs it is.

If you have reasonable practices, git blame will show you the original ticket, a link to the code review, and relevant information about the change.

Then just do it in your greenhouse. If you don't have one, ask your help to build one.

Yes, Arch is really stable and has been for about 10 years. In fact, I started using Arch just before it got really stable (around the /usr merge) and stuck with it for a few years after. It's a fantastic distro! If openSUSE Tumbleweed stopped working for me, I'd probably go back to Arch. I ran it on multiple systems, and my main reason for switching was that I wanted a stable release cycle on servers and rolling releases on desktops so I could use the same tools on both.

It has fantastic documentation, true, but most likely a new user isn't going to go there, they'll go to a forum post from a year ago and change something important. The whole point of going through the Arch install process is to force you to get familiar with the documentation. It's really not that hard, and after the first install (which took a couple hours), the second took like 20 min. I learned far more in that initial install than I did in the 3-ish years I'd used other distros before trying Arch.

CachyOS being easy to set up defeats the whole purpose, since users won't get familiar with the wiki. By all means, go install CachyOS immediately after the Arch install, but do yourself a favor and go through it first. You'll understand everything from the boot process to managing system services so much better.

 

Current setup:

  • one giant docker compose file
  • Caddy for TLS termination
  • only exposed port is Caddy

I've been trying out podman, and I got a new service running (seafile), and I did it via podman generate kube so I can run it w/ podman kube play. My understanding is that the "podman way" is to use quadlets, which means container, network, etc files managed by systemd, so I tried out podlet podman kube play to generate a systemd-compatible file, but it just spat out a .kube file.

Since I'm just starting out, it wouldn't be a ton of work to convert to separate unit files, or I can continue with the .kube file way. I'm just not sure which to do.

At the end of this process, here's what I'd like in the end:

  • Caddy is the only exposed port - could block w/ firewall, but it would be nice if they worked over a hidden network
  • each service works as its own unit, so I can reuse ports and whatnot - I may move services across devices eventually, and I'd rather not have to remember custom ports and instead use host names
  • automatically update images - shouldn't change the tag, just grab the latest from that tag

Is there a good reason to prefer .kube over .container et al, or vice versa? Which is the "preferred" way to do this? Both are documented on the same "quadlet" doc page, which just describes the acceptable formats. I don't think I want Kubernetes anytime soon, so the only reason I went that way is that it looked similar to compose.yml and I saw a guide for it, but I'm willing to put in some work to port from that if needed (and the docs for the kube YAML format kinda suck). I just want a way to ship around a few files so moving a service to a new device is easy. I'll only really have like 3-4 devices (NAS, VPS, and maybe an RPi or two), and I currently only have one (NAS).
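For reference, here's my rough understanding of what the .container route would look like for seafile (image tag, paths, and network name are placeholders I haven't tested, so treat it as a sketch):

```ini
# ~/.config/containers/systemd/seafile.container (rootless quadlet location)
[Unit]
Description=Seafile via quadlet

[Container]
Image=docker.io/seafileltd/seafile-mc:latest
ContainerName=seafile
# Reference to an internal .network quadlet so nothing is published except through Caddy.
Network=internal.network
Volume=/srv/nas/seafile:/shared
# Opt in to podman-auto-update so the image refreshes on the same tag.
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

My understanding is that quadlet generates the actual systemd service from that at daemon-reload time, and that podman-auto-update.timer handles the "update in place on the same tag" part, but I'd love confirmation from someone actually running it this way.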

Also, is there a customary place to stick stuff like config files? I'm currently using my user's home directory, but that's not great long-term. I'll rarely need to touch these, so I guess I could stick them on my NAS mount (currently /srv/nas/) next to the data (/srv/nas//). But if there's a standard place to stick this, I'd prefer to do that.

Anyway, just looking for an opinionated workflow to follow here. I could keep going with the kube YAML route, or I could switch to the .container route; I don't mind either way since I'm still early in the process. I'm currently thinking of porting to the .container method to try it out, but I don't know if that's the "right" way or if .kube with a YAML config is the "right" way.

 

Apparently US bandwidth was reduced to 1TB for their base plan, though they have 20TB for the same plan in Europe. I don't use much bandwidth right now, but I could need more in the future depending on how I do backups and whatnot.

So I'm shopping around in case I need to make a switch. Here's what I use it for:

  • VPN to get around CGNAT - so all traffic for my internal services goes through it
  • HAProxy - forwards traffic to my various services
  • small test servers - very low requirements, basically just STUN servers
  • low traffic blog

Hard requirements:

  • custom ISO, or at least openSUSE support
  • inexpensive - shooting for ~$5/month, I don't need much
  • decent bandwidth (bare minimum 50 Mbps, ideally 1 Gbps+), with high-ish caps - I won't use much data most of the time (a handful of GB), but occasionally might use 2-5 TB

Nice to have:

  • unmetered/generous bandwidth - would like to run a Tor relay
  • inexpensive storage - need to put my offsite backups somewhere
  • API - I'm a nerd and like automating things :)
  • location near me - I'm in the US, so anywhere in NA works

Not needed:

  • fast processors
  • lots of RAM
  • loose policies around torrenting and processing (no crypto or piracy here)
  • support features, recipes, etc - I can figure stuff out on my own

I'll probably stick with Hetzner for now because:

  • pricing is still fair (transfer is in line with competitors)
  • can probably move my server to Germany w/o major issues for more bandwidth
  • they hit all of the other requirements, nice to haves, and many unneeded features

Anyway, thoughts? The bandwidth change pisses me off, so let me know if there's a better alternative.
