dont

joined 2 years ago
[–] dont@lemmy.world 1 points 1 week ago

The annoyance grows with the number of hosts ;-) I still want to feel in control, which is why I'm hesitant to implement unattended decryption with something like tang/clevis.

But I'm interested in the idea of not messing with the initrd image: booting into a running system and then waiting for the decryption of a data partition. Isn't it a hassle to manually override all the relevant service declarations etc. so they wait for the mount? Or how do you do that?
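For context, what I'd expect this to look like (mount path and unit name are hypothetical, just to illustrate the pattern) is a systemd drop-in per affected service:

```ini
# /etc/systemd/system/myapp.service.d/wait-for-data.conf
# Hypothetical drop-in: hold the service back until the
# decrypted data partition is mounted (path is an assumption).
[Unit]
RequiresMountsFor=/srv/data
```

That keeps the service from starting before the (still-locked) partition appears, but it does mean touching every unit that depends on the data.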

[–] dont@lemmy.world 2 points 1 week ago

The passphrase should be stored and transferred encrypted, but that would basically mean reimplementing Mandos, a tool that was mentioned in another reply: https://lemmy.world/post/38400013/20341900. Besides that, yes, that's one way I've also considered: an Ansible playbook with access to all encrypted hosts' initrd SSH keys that tries to log in; if the host is waiting for decryption, it provides the key, done. Needs one webhook for notification and one for me to trigger the playbook run... Maybe I will revisit this...
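A rough sketch of what I mean (group name, key paths and the `cryptroot-unlock` call are assumptions about a Debian-style dropbear initrd, not a tested setup):

```yaml
# Hypothetical playbook: try to unlock hosts stuck in their
# initrd SSH shell. All names and paths are placeholders.
- hosts: encrypted_hosts
  gather_facts: false
  tasks:
    - name: Feed the LUKS passphrase to cryptroot-unlock via SSH
      delegate_to: localhost
      ansible.builtin.shell: >
        echo "{{ luks_passphrase }}" |
        ssh -i ~/.ssh/initrd_unlock
        -o UserKnownHostsFile=~/.ssh/known_hosts.initrd
        root@{{ inventory_hostname }} cryptroot-unlock
      no_log: true
      ignore_errors: true  # host may already be booted and unreachable on the initrd key
```

The passphrase would still have to live somewhere the playbook can read it, which is exactly the storage problem again.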

[–] dont@lemmy.world 1 points 1 week ago (1 children)

It wasn't clear to me at first glance how the Mandos server gets the approval to supply the client with its desired key, but I figured it out in the meantime: that's done through the mandos-monitor TUI. However, that doesn't quite fit my UX expectations. Thanks for mentioning it, though; it's an interesting project I will keep in mind.

[–] dont@lemmy.world 1 points 1 week ago

Definitely! I have BMC/KVM everywhere (well, everywhere that matters).

I have talked myself out of this (for now), though. I think if I ever find the time to revisit this, I will try to do it by injecting some OIDC-based approval (memo to myself: CIBA flow?) into something like clevis/tang.

[–] dont@lemmy.world 1 points 1 week ago

Sort of, but this seems a bit heavy. (That being said, I was also considering PKCS#11 on a network-attached HSM, which seems to do basically the same...)

[–] dont@lemmy.world 1 points 1 week ago (2 children)

Yes, I was thinking about storing encrypted keys, but still, using claims for this is clearly just wrong... Using a vault to store the key is probably the way to go, even though it adds another service the setup depends on.

[–] dont@lemmy.world 0 points 1 week ago (3 children)

Interesting, do you happen to know how this "approval" works here, concretely?

[–] dont@lemmy.world 2 points 4 weeks ago

How long did it take to get zpool-attach? I will not join the waiting list 😉

[–] dont@lemmy.world 12 points 4 weeks ago (4 children)

The selling point of Unraid is that you can mix and match different disk sizes and it figures out a (good? efficient?) way to handle them, even as you grow a pool. You're not going to have a good time with a 1 TB drive, a 2 TB drive and a 15 TB drive using ZFS; Unraid doesn't care... (I use and prefer ZFS myself, by the way; this is hearsay.)

[–] dont@lemmy.world 5 points 6 months ago* (last edited 6 months ago) (1 children)

I love the simplicity of this, I really do, but I don't consider this SSO. It may be if you're a single user, but even then, many of the things I'm hosting have their own authentication layer and only allow offloading to an OIDC/OAuth or LDAP provider.

[–] dont@lemmy.world 3 points 7 months ago

Deployment of NC on Kubernetes/Docker (and maintenance thereof) is super scary. They copy config files around in the Dockerfile, for example; it's a hell of a mess. (And not just Docker: I have one instance running on old-fashioned web hosting with only FTP access, and I have to manually edit the .ini and Apache config after each update, since they get overwritten.) As the documentation of oCIS grows and it gains more features, I might actually migrate even the larger instances, but for now I have to consider it not feature-complete (people have expectations of Nextcloud that aren't met by oCIS and its extensions). Moreover, I have more trust in the long-term openness of Nextcloud as opposed to ownCloud, for historical reasons.

 

I'm afraid this is going to attract the "why use podman when docker exists" folks, so let me put this under the supposition that you're already sold on (or at least considering) using Podman for whatever reason. (For me, it has been the existence of pods, to be used in situations where pods make sense, but in a non-redundant, single-node setup.)

Now, I was trying to understand the purpose of quadlets and, frankly, I don't get it. It seems to me that as soon as I want a pod with more than one container, what I'll be writing is effectively a Kubernetes configuration plus some systemd-unit-like file, whereas with podman-compose I just have the (arguably) simpler compose file and a systemd service file (which works for all pod setups).

I would get that it's somewhat simpler, more streamlined and possibly more stable to use quadlets to let systemd manage single containers, instead of putting podman run commands in systemd service files. Is that all there is to it, or do people use quadlets as a kind of lightweight almost-Kubernetes that leverages systemd in a supposedly reasonable way? (Why would you want to do that when lightweight, fully compliant Kubernetes distros are a thing nowadays?)
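For the single-container case, this is roughly what I understand a quadlet to be (image and names are placeholders; systemd generates a `web.service` from the `.container` file at daemon-reload):

```ini
# ~/.config/containers/systemd/web.container
# Minimal quadlet sketch for a rootless single container.
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

That does look cleaner than hand-writing a service file full of `podman run` flags, but it doesn't answer the multi-container-pod question above.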

Am I missing or misunderstanding something?
