[–] yote_zip@pawb.social 0 points 2 years ago* (last edited 2 years ago) (3 children)

> Conduit is also licensed under Apache 2.0, so it could also be taken closed source at any point in time. The reason this wouldn’t impact Conduit as much is that there’re other contributors, whilst Synapse and Dendrite are almost exclusively developed by Element.

Right. The current worry is based on the idea that if Synapse/Dendrite went closed-source right now, an open-source fork would be as good as dead. Element is responsible for roughly 95% of Synapse/Dendrite development, and I'm sure a community fork would have to play a lot of catch-up to figure out how to keep it going. If the community were more involved in Synapse/Dendrite development (and if Element let them), there would be less cause for alarm, since closing the source would just mean an immediate community fork and putting Element on ignore. Also, to reiterate: the Matrix Foundation is not going along with Element on this move, and even if Element pulled something shady, the Matrix Core Spec etc. would still remain open and under the Foundation's control, so the most we stand to lose is Synapse/Dendrite and all of Element's developers.

As for the rest, I agree, and I do actually trust that Element is simply playing the only card they have. These maneuvers are all required for Element to survive as a company at all, but they also unfortunately leave this backdoor open as a consequence. Matthew has pinky-promised over and over that they are acting in good faith and would never use the backdoor, but it's understandable that its mere presence is putting everyone on edge. Best case, we take this as a warning sign that if Element drops dead tomorrow, Matrix dies with it. If people don't want Matrix to be practically owned by Element, we should diversify and prepare escape plans.

[–] yote_zip@pawb.social 0 points 2 years ago (7 children)

This is actually quite a controversial change, mainly because of the switch to a CLA. The CLA indirectly gives Element the ability to relicense the code as closed source whenever they feel like it in the future. Semi-controversially, they are also making this AGPL change primarily so they can start selling dual licenses to companies. The Matrix Foundation itself does not support this change, though Element is within its rights to make it.

You can read some more thoughts on this from the pessimistic folks at Hacker News. My main takeaway is that I don't trust Element, because I don't trust anyone. I'm sure they're doing this in good faith, but I don't like the power they currently hold. I hope this is the push needed to start focusing efforts on alternative homeserver implementations like Conduit.

[–] yote_zip@pawb.social 0 points 2 years ago (1 children)

IMO, containerize everything. Containers save a lot of headaches, and your time is valuable. You are correct that moving configurations is trivial with containers; backing them up and restoring them is also easy.

In the meantime you can install whatever you want in a VM - just keep your Docker configurations in one place and move them when you're ready. I like Proxmox, but it may be overkill if you aren't going to have a complex setup. Its main selling point is that you 'containerize' your OS as well, which means you can snapshot it and do various other tricks with running multiple OSes. If your new server will eventually be a NAS, Proxmox can do other neat tricks like running TrueNAS/OpenMediaVault in a VM, or hosting a ZFS pool on Proxmox itself.
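
For example (not your exact setup, just a sketch - the service, paths, and port here are placeholders): if you keep a container's state in bind mounts under one directory, moving it to the new server is just a copy.

```bash
# Keep all of a service's state in bind mounts under one directory
# (/opt/stacks/jellyfin and the jellyfin image are just examples).
mkdir -p /opt/stacks/jellyfin/config

docker run -d \
  --name jellyfin \
  --restart unless-stopped \
  -p 8096:8096 \
  -v /opt/stacks/jellyfin/config:/config \
  jellyfin/jellyfin

# Migrating later: stop the container, copy the directory to the new
# host, and start the same container there.
docker stop jellyfin
rsync -a /opt/stacks/jellyfin/ newserver:/opt/stacks/jellyfin/
```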

If you end up wanting to use Proxmox, you can also run it inside a VM on your current machine to get comfortable with it in advance.

[–] yote_zip@pawb.social 0 points 2 years ago (2 children)

The main problem is just getting TrueNAS direct access to the physical disks via IOMMU groups and PCI passthrough. An HBA card is a super easy way to get a dedicated IOMMU group with all of your drives attached, which is why they're so common in these sorts of setups. If you can pull your onboard SATA controller down into the TrueNAS VM without breaking anything on the host, it will work the same way as an HBA card as far as TrueNAS cares.

(To my knowledge, SATA controllers usually have to be passed through all at once, so if your host system boots from a drive on that same controller, unhooking it from the host and handing it to the guest probably won't work.)
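
If you want to check what's in each IOMMU group before attempting the passthrough, this is the standard sysfs walk (assumes IOMMU is enabled in your BIOS and kernel):

```bash
#!/bin/bash
# List every IOMMU group and the PCI devices inside it. A controller is only
# cleanly passable to a VM if its group contains nothing the host still needs.
shopt -s nullglob
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev%/devices/*}
    group=${group##*/}
    echo "IOMMU group ${group}: $(lspci -nns "${dev##*/}")"
done | sort -V
```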

[–] yote_zip@pawb.social 0 points 2 years ago (5 children)

This is a fairly common setup and it's not too complex - learning more about Proxmox and TrueNAS/ZFS individually will probably be easiest.

Usually:

  • Proxmox on bare metal

  • TrueNAS Core/Scale in a VM

  • Pass the HBA PCI card through to TrueNAS and set up your ZFS pool there (see the passthrough sketch after this list)

  • If you run your app stack through Docker, set up a minimal Debian/Alpine VM to host it (you can technically run Docker inside an LXC, but experienced people keep saying it eventually causes problems, and I'll take their word for it)

  • If you run your app stack through LXCs, just set them up through Proxmox normally

  • Set up an NFS share through TrueNAS, and point your app stack at that share (mount sketch below)

  • (Optional) Just run your ZFS pool on Proxmox itself and skip TrueNAS entirely (sketch below)
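
For the HBA passthrough step, Proxmox handles it with `qm` - the VM ID and PCI address below are placeholders; find yours with `lspci`:

```bash
# Find the HBA's PCI address (e.g. an LSI SAS controller).
lspci | grep -i sas

# Pass it through to the TrueNAS VM (VM ID 100 and 0000:01:00.0 are examples).
# pcie=1 applies to q35 machine types; omit it for i440fx.
qm set 100 --hostpci0 0000:01:00.0,pcie=1
```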
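
For the NFS step, the client side on the app VM is the usual mount dance (hostname and dataset path are placeholders; on Alpine it's `apk add nfs-utils` instead):

```bash
# On the Debian app VM: install the NFS client and mount the share
# exported by TrueNAS.
apt install nfs-common
mkdir -p /mnt/apps
mount -t nfs truenas.lan:/mnt/tank/apps /mnt/apps

# To make it persistent, add a line like this to /etc/fstab:
# truenas.lan:/mnt/tank/apps  /mnt/apps  nfs  defaults,_netdev  0  0
```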
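
And if you go the TrueNAS-less route, Proxmox ships ZFS, so the pool is basically a one-liner (pool name, layout, and disk IDs are placeholders):

```bash
# Create a mirrored pool directly on the Proxmox host. Use stable
# /dev/disk/by-id/ paths rather than /dev/sdX names.
zpool create -o ashift=12 tank mirror \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Create a dataset for app data and check pool health.
zfs create tank/apps
zpool status tank
```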

[–] yote_zip@pawb.social 0 points 2 years ago (6 children)

More hard drive slots? No problem! Extra vibrations are good for hard drives probably.