sxan

joined 2 years ago
[–] sxan@midwest.social 1 points 21 hours ago

Because they were wondering if it was a Caddy issue, and I'll bet real money it isn't.

Being able to exclude components from being a possible source of the issue is critical to problem solving.

[–] sxan@midwest.social 1 points 21 hours ago (1 children)

Hmmm. You're right; it's a mechanism I've never used because it's more work and slower, so I forget about it. All you need to do is prove you own the domain, and control over the DNS record certainly demonstrates that.

Is that what Porkbun does? Because Caddy can automate the HTTP challenge method out of the box, but not the DNS challenge method, since that handshake requires updating the DNS record programmatically.
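
For reference, the DNS challenge only asks for a TXT record under the "_acme-challenge" label; a zone-file sketch, where the domain and token value are made up for illustration:

```
_acme-challenge.example.com. 300 IN TXT "made-up-acme-token-value"
```

Automating it means something has to create and remove that record on demand, which is why it needs API access to whoever hosts the DNS.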

[–] sxan@midwest.social -5 points 2 days ago (5 children)

I've never heard of Porkbun, but it doesn't sound like a Caddy issue. Let's Encrypt requires being able to resolve the DNS name you're requesting a cert for, and being able to connect to your web service to fetch a secret that proves you own the domain. If Porkbun does something like punch a hole in your LAN firewall and let in HTTP traffic, then Porkbun is the problem, not Caddy.
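
As a sketch of what that secret-fetch amounts to - assuming the standard ACME HTTP-01 layout; the token and key-authorization values below are made up:

```shell
# The CA tells you to serve a token at a well-known path under your webroot,
# then fetches it over plain HTTP and compares the body to the value it expects.
webroot=$(mktemp -d)                       # stand-in for your site's docroot
token="made-up-token"                      # hypothetical token from the CA
keyauth="made-up-key-authorization"        # hypothetical proof value
mkdir -p "$webroot/.well-known/acme-challenge"
printf '%s' "$keyauth" > "$webroot/.well-known/acme-challenge/$token"
# Validation is effectively: GET http://your.domain/.well-known/acme-challenge/<token>
cat "$webroot/.well-known/acme-challenge/$token"
```

If that path can't be reached from the internet on port 80, issuance fails no matter what Caddy does.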

[–] sxan@midwest.social 1 points 5 days ago

Shit, that's way more expensive. If only you knew someone in the US who would buy a few boxes and ship them to you...

But, seriously, yeah, that basically eliminates it as an option.

[–] sxan@midwest.social 1 points 5 days ago (2 children)

Just out of curiosity, is the product on Amazon, and is it that same price?

[–] sxan@midwest.social 1 points 5 days ago (4 children)

This is more expensive in your country?

https://a.co/d/9DiKeie

That's a little over $11 USD per 100 GB disk. Is it just more expensive where you live, or is it shipping?

I'd be really surprised if these weren't manufactured in Asia somewhere.

[–] sxan@midwest.social 2 points 6 days ago

It'd be more space-efficient to store a qcow2 image of Linux with a minimal desktop and basically only Darktable on it. The VM disk format hasn't changed in decades.

Shoot. A bootable disc containing Linux and the software you need to access the images; on a separate track, a qcow2 image of the same; and on a third, just Darktable. Best case, you pop in the disc and run Darktable. Or you fire up a VM with the image. Worst case, boot into Linux. This may be the way I go, although - again - the source images are the important part.

> I’d be careful with using SSDs for long term, offline storage.

What I meant was: keep the master sidecar on SSD for regular use, and back it up occasionally to a RW disc. Probably with a simple cp -r to a directory with a date. This works for me because my sources don't change, except to add data, which is usually stored in date directories anyway.
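
That cp -r scheme is about as simple as it gets; a sketch, with temp directories standing in for the real sidecar library and the RW disc mount:

```shell
src=$(mktemp -d)                           # stand-in for the sidecar library
backups=$(mktemp -d)                       # stand-in for the mounted RW disc
touch "$src/IMG_0001.xmp"                  # fake sidecar file for illustration
# One dated directory per backup run:
cp -r "$src" "$backups/sidecars-$(date +%F)"
ls "$backups"
```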

You also want to archive the exported files, and sometimes those change? Surely that's much less data? If you're like me, I'll shoot 128xB and end up using a tiny fraction of the shots. I'm not sure what I'd do for that - probably BD-RW. The longevity isn't great, but it's by definition mutable data, and in any case the most recent version can easily be regenerated as long as I have the sidecar and source image secured.

Burning the sidecar to disc is less about storage and more about backup, because that data is mutable. I suppose appending a backup snapshot to M-Disc periodically would be belt and suspenders, and frankly the sidecar data is so tiny I could probably append such snapshots to a single disc for years before it all gets used. Although... sidecar data would compress well. Probably simply tgz, then, since gzip has always existed and always will, even if it's been superseded by better algorithms.
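
A dated tgz snapshot might look like this (the paths are placeholders, not my real layout):

```shell
# Each snapshot is a self-contained tarball; appending one per backup run
# to the same disc gives a cheap history of the mutable sidecar data.
src=$(mktemp -d)                           # stand-in for the sidecar tree
out=$(mktemp -d)                           # stand-in for the staging area
touch "$src/IMG_0001.xmp"                  # fake sidecar file for illustration
tar czf "$out/sidecars-$(date +%F).tgz" -C "$src" .
tar tzf "$out/sidecars-$(date +%F).tgz"    # list contents to sanity-check
```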

BTW, I just learned about the BLAKE3 (b3) hashing algorithm (about which I'm chagrined, because I thought I kept an eye on the topic of compression and hashing). It's astonishingly fast - it's the verification part I'm suggesting it for.
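
The check/verify workflow is the same as with the coreutils hash tools; this sketch uses sha256sum so it runs anywhere, but b3sum is a drop-in replacement for it, and the speed is the whole point:

```shell
dir=$(mktemp -d)
printf 'image data' > "$dir/IMG_0001.jpg"  # stand-in for a real photo
cd "$dir"
sha256sum IMG_0001.jpg > checksums.txt     # with BLAKE3: b3sum IMG_0001.jpg
sha256sum -c checksums.txt                 # with BLAKE3: b3sum -c checksums.txt
# prints: IMG_0001.jpg: OK
```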

[–] sxan@midwest.social 1 points 1 week ago (7 children)

The densities I'm seeing on M-Discs - 100 GB at $5 per disc, a couple years ago - seemed acceptable to me. $50 for a TB? How big is your archive? Mine still fits on a 2 TB disk.

> Copying files directly would work, but my library is real big and that sounds tedious.

I mean, putting it in an archive isn't going to make it any smaller. Compression rarely helps, even on losslessly compressed images.

And we're talking about 100 GB discs. Is squeezing that last 10 MB out of the disc by splitting an image across two discs worth it?

The metadata is a different matter. I'd have to think about how to handle the sidecar data... but that you could almost keep on a DVD-RW, because there's no way that's going to be anywhere near as large as the photos themselves. Is your photo editor DB bigger than 4GB?

I never change the originals. When I tag and edit, that information is kept separate from the source images - so I never have multiple versions of pictures, unless I export them for printing, or something, and those are ephemeral and can be re-exported by the editor with the original and the sidecar. Music, and photos, I always keep the originals isolated from the application.

This is good, though; it's helping me clarify how I want to archive this stuff. Right now mine is just backed up on multiple disks and once in B2, but I've been thinking about how to archive for long term storage.

I think I'm going to go the M-Disc route, with sidecar data on SSD and backed up to Blu-ray RW. The trick will be letting Darktable know that the source images are on different media, but I'm pretty sure I saw an option for that. For sure, we're not the first people to approach this problem.

The whole static binary thing - I'm going that route with an encrypted share for financial and account info, in case I die, but that's another topic.

[–] sxan@midwest.social 2 points 1 week ago (9 children)

This is an interesting problem for the same use case which I've been thinking about lately.

Are you using standard BluRay, or M-Discs?

My plan was to simply copy files. These are photos, and IME they don't benefit from compression (I stopped taking raw-format pictures when I switched to Fujifilm; the JPEGs coming from the camera were better than anything I could produce from raw in Darktable). Without compression, putting them in tarballs only adds another level of indirection; I can just checksum images directly after writing them, and access them directly when I need to. I was going to use the smallest M-Disc for an index, and just copy and modify it when it changed, and version that.

I tend not to change photos after they've been processed through my workflow, so in my case I'm not as concerned with the "most recent version" of an image. In any case, the index would reflect which disc the latest version of an image lived on, if something did change.
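
A plain-text index like that can be as simple as one line per file recording which disc it lives on; a sketch, where the disc labels and paths are made up for illustration:

```shell
idx=$(mktemp)
# Format: <disc-label> <tab> <path-on-disc>
printf 'disc-001\t2019/IMG_0001.jpg\n' >> "$idx"
printf 'disc-002\t2019/IMG_0001.jpg\n' >> "$idx"   # re-burned; newer copy
# Latest location of a given image = last matching line:
grep '2019/IMG_0001.jpg' "$idx" | tail -n 1
```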

For the years I did shoot raw, I'm archiving those as DNG.

For the sensitive photos, I have a Rube Goldberg plan that will hopefully result in anyone with the passkey being able to mount that image. There aren't many of those, and that set hasn't been added to in years, so it'll go on one disc with the software necessary to mount it.

My main objective is accessibility after I'm gone, so having as few tools in the way as possible trumps other concerns. I see no value in creating tarballs - attach the device, pop in the index disc (if necessary), find the disc with the file, pop that in, and view the image.

Key to this is

  • the data doesn't change over time
  • the data is already compressed in the file format, and does not benefit from extra compression

[–] sxan@midwest.social 4 points 1 week ago (1 children)

The problem is the design of Matrix itself. As soon as a single user joins a large room, the server clones all of the history it can.

I mean, there are basically two fundamental design options here: either base the protocol around always querying the room host for data and cache as little as possible, or cache as much as possible and minimize network traffic. Matrix went for minimizing network traffic, and trying to circumvent that - while possible with cache tuning - is going to cause adverse client behaviors.

XMPP had a lot of problems, too, though. Although I've been told some (all?) of these have been addressed, when I left the Jabberverse there was no history synchronization, and support for multiple clients was poor - IIRC, messages got delivered to exactly one client. I lost my address book multiple times, encryption was poorly supported, and XMPP is such a chatty protocol, wasteful of network bandwidth. V/VoIP support was terrible, and it had a sparse feature set in terms of editing history, reactions, and so on. Group chat support was poor. It was little better than SMS, as I remember.

It was better than a lot of other options when it was created, but it really was not very good; there are reasons why alternative chat clients were popular, and XMPP faded into the background.

[–] sxan@midwest.social 25 points 1 week ago (4 children)

A lot of memory, and a lot of disk space.

Synapse is the reference platform, and even if they don't, it feels as if the Matrix team make changes to Synapse and then update the spec later. This makes it hard for third-party servers (and clients!) to stay compliant, which is why they rise and fall. The spec management of Matrix is awful.

So, while suggestions may be to run something other than Synapse - which I sympathize with, because it's a PITA and expensive to run - if you go with something else just be prepared to always be trailing. Migrating server software is essentially impossible, too, so you'll be stuck with what you pick.

Matrix is one of the best projects to come out in decades, and one of the worst managed.

[–] sxan@midwest.social 12 points 1 week ago

Yeah, the Matrix protocol is poorly managed.

Synapse is the reference platform, and it's incredibly annoying.

 

What are you folks using for self-hosted single sign-on?

I have my little LDAP server (lldap is fan-fucking-tastic -- far easier to work with than OpenLDAP, which gave me nothing but heartburn). Some applications can be configured to work with it directly; several don't have LDAP account support. And, ultimately, it'd be nice to have SSO - having the same password everywhere is great, but having to sign in only once (per day, or week, or whatever) would be even nicer.

There are several self-hosted Auth* projects; which is the simplest and easiest? I'd really just like to be able to start it up, point it at my LDAP server, and go. Fine-grained ACLs and RBAC support are nice and all, but simplicity trumps everything in my case. Configuring these systems is, IME, a complex process, with no small number of dials to turn.

A half dozen users, and probably only two groups: admin, and everyone else. I don't need fancy. OSS, of course. Is there any of these projects that fit that bill? It would seem to be a common use case for self-hosters, who don't need all the bells and whistles of enterprise-grade solutions.

 

Edit 2024-10-01

Another person posted about a similar need, and I decided to create a comparison-matrix document to track it, in the hope that those of us looking for this specific use case could come up with the best solution. The idea here is that, while many OSS social media projects are capable of being used like a Facebook wall, they don't all necessarily provide an ideal user experience. A feature set is not equivalent to being designed for a specific use case, and the desired workflow should be the primary means of interacting with the service. The (for now) open document tracking this is here.

I'm a little surprised I can't find any posts asking this question, and that there doesn't seem to be a FAQ about it. Maybe "Facebook" covers too many use cases for one clean answer.

Up front, I think the answer in my case is going to be "Friendica," but I'm interested in hearing whether there are any other, better options. I'm sure Mastodon and Lemmy aren't it, but there's Pixelfed and a dozen other options with which I'm less familiar.

This mostly centers around my 3-y/o niece and a geographically distributed family, and the desire for Facebook-like image sharing with a timeline feed, comments, and likes (positive feedback), that sort of thing. Critical in our case is a good iOS experience for capturing and sharing short videos and pictures; a process where the parents have to take pictures, log into a web site, create a post, and attach an image from the gallery is simply too fussy, especially for non-technical and mostly overwhelmed parents. Less important is the extended-family experience, although alerts would be nice. Privacy is critical; the parents are very concerned about limiting access to the shared media of their daughter, so the ability to restrict viewing to logged-in family members is important.

FUTO Circles was almost perfect. There was some initial confusion about the difference between circles and groups, but in the end the app experience was great and it accomplished all of the goals -- until it didn't. At some point, half of the already shared media disappeared from the feeds of all of the iOS family members (although the Android user could still see all of the posts). It was a thoroughly discouraging experience, and resulted in a complete lack of faith in the ecosystem. While I believe it might be possible to self-host, by the time we decided that everyone liked it and I was about to look into self-hosting our own family server (and remove the storage restrictions, which hadn't yet been reached when it all fell apart), the iOS app bugs had cropped up and we abandoned the platform.

So these are the requirements we're looking for:

  • The ability to create private, invite-only groups/communities
  • A convenient mobile capture+share experience, which means an app
  • Reactions (emojis) & comment threads
  • Both iOS and Android support, in addition to whatever web interface is available for desktop use

and, given this community, obviously self-hostable.

I have never personally used Facebook, but my understanding is that it's a little different, in that communities are really more like individual blogs with some post-level feedback mechanisms; in this way it's more like Mastodon, where you follow individuals and can respond to their posts, albeit with a loosely enforced character limit. That's as opposed to Lemmy, which, while moderated, doesn't really have a per-community "owner" model. I can imagine setting up a Lemmy instance and creating a community per person, but I feel as if that'd be trying to wedge a square peg into a round hole.

Pixelfed might be the answer, but from my brief encounter with it, it feels more like a photo-oriented Mastodon than a Facebook wall-style experience (it's Facebook that has "walls," right?).

So back to where I started: in my personal experience, it seems like Friendica might be the best fit, except that I don't use an iPhone and don't know if there are any decent Friendica apps that would satisfy the user experience we're looking for; honestly, I haven't particularly liked any of the Android apps, so I don't hold out much hope for iOS.

Most of the options speak ActivityPub, so maybe I should just focus on finding the right AP-based mobile client? Although, so far the best experience (until it broke) has been Circles, which is based on Matrix.

It's challenging to install and evaluate all of the options, especially when -- in my case -- to properly evaluate the software requires getting several people on each platform to try and see how they like it. I value the community's experience and opinions.
