solrize

joined 2 years ago
[–] solrize@lemmy.world 4 points 1 hour ago

They've done that on and off for ages, and the ones being offered with Ubuntu here are mostly pretty expensive or else not very interesting. I've been content to buy older ThinkPads and self-install Debian for my past several laptops. I was somewhat tempted by the recent IdeaPad Yogas but resisted, and since then prices have gone up, whether due to tariffs or whatever else.

[–] solrize@lemmy.world 21 points 10 hours ago

Actual difficult instances of TSP are pretty rare, and for something like Uber Eats, it's fine if your route is 2% worse than the mathematical optimum. Traffic fluctuations probably matter more than having the shortest route.

There are many good heuristics for TSP that might not give you the optimal solution, but that will generally come pretty close. The Wikipedia article probably describes some of these.
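In case it helps, here's a minimal sketch of two of the classic heuristics (greedy nearest-neighbour construction followed by 2-opt improvement); the random 2D "cities" are made up purely for illustration, and a real router would use road distances and a smarter local search:

```python
import math
import random

def tour_length(tour, pts):
    # Total length of the closed tour, including the edge back to the start.
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts):
    # Greedy construction: always visit the closest unvisited city next.
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    # Local improvement: reverse a segment whenever that shortens the tour.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 2, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts):
                    tour, improved = cand, True
    return tour

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(30)]
    tour = two_opt(nearest_neighbor(pts), pts)
    print(round(tour_length(tour, pts), 3))
```

Nearest-neighbour alone is typically within roughly 25% of optimal on random instances, and 2-opt usually pulls that to within a few percent, which is plenty for a delivery route.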

[–] solrize@lemmy.world 9 points 2 days ago

Mozilla propaganda. It's not just about individually identifiable data. Privacy means not giving the bad guys ANY data, whether or not it points at any individual.

[–] solrize@lemmy.world 6 points 4 days ago (2 children)

How much do you expect to pay for the 24 NVMe disks?

[–] solrize@lemmy.world 21 points 4 days ago

It's possible for a while, but it becomes a whack-a-mole game if you're doing anything they would care about, so you'll have to keep moving it around. VPS forums will have some info.

[–] solrize@lemmy.world 54 points 1 week ago (9 children)

This is about "Chris Krebs, the former head of the US Cybersecurity and Infrastructure Security Agency (CISA) and a longtime Trump target".

[–] solrize@lemmy.world 5 points 1 week ago* (last edited 1 week ago)

Oh I didn't know about the new requirements. Less backwards compatibility too. IBM 3592 looks better but costs even more. Tape drives can't be that much higher tech than HDDs, so if they cranked up the volume they could likely be way more affordable.

[–] solrize@lemmy.world 21 points 1 week ago (3 children)

The upfront cost of tape is excessive though. It wasn't always like that. And LTO-9 missed its capacity target: it's 18TB (1.5x LTO-8) instead of 24TB as planned. Who knows what will happen later in the roadmap.

[–] solrize@lemmy.world 2 points 1 week ago

Are you familiar with git hooks? See

https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks

Scroll to the part about server-side hooks. The idea is to propagate updates automatically as you receive them, so you get git-level replication instead of rsync.
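For instance, a post-receive hook can simply mirror everything it just received out to the replicas. This is only a rough sketch, assuming the bare repo already has push-mirror remotes named replica1 and replica2 configured (those names are mine, not from the book):

```python
#!/usr/bin/env python3
# Sketch of hooks/post-receive in the bare repository (make it executable).
# We ignore the ref-update lines on stdin and simply mirror all refs,
# which keeps each replica an exact copy of this server.
import subprocess
import sys

REPLICAS = ["replica1", "replica2"]  # hypothetical remote names

for remote in REPLICAS:
    result = subprocess.run(["git", "push", "--mirror", remote])
    if result.returncode != 0:
        print(f"warning: replication to {remote} failed", file=sys.stderr)
```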

[–] solrize@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (2 children)

I see, fair enough. Replication is never instantaneous, so do you have definite bounds on how much latency you'll accept? Do you really want multiple independent git servers online at once? Most HA systems have a primary and a failover, so users only ever see one server. And if you want to use Ceph, in practice all the servers would be in the same DC. Is that ok?

I think I'd look in one of the many git books out there to see what they say about replication schemes. This sounds like something that must have been done before.

[–] solrize@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (4 children)

Why do you want 5 git servers instead of, say, 2? Are you after something more than high availability? Are you trying to run something like GitHub where some repos might have stupendous concurrent read traffic? What about update traffic?

Is it acceptable if the servers sometimes get out of sync with each other for 0.5 sec or so, as long as each one is internally consistent at all times?

Anyway, my first idea isn't rsync but rather update hooks that replicate pushes to the other servers, so updates still look atomic to clients (a sketch of the remote setup is below). Alternatively, use a replicated file system under Ceph or the like, so you can quickly migrate away from failed servers; that's a standard cloud hosting setup.
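To make that concrete, here's roughly how you'd wire up the push mirrors that such a hook would fan out to; the paths and hostnames are placeholders I made up:

```python
#!/usr/bin/env python3
# Sketch: add one push-mirror remote per replica to the primary bare repo,
# so a hook can replicate every update with a single `git push replicaN`.
import subprocess

PRIMARY = "/srv/git/project.git"  # placeholder path to the primary bare repo
REPLICAS = [
    "git@replica1.example.com:/srv/git/project.git",
    "git@replica2.example.com:/srv/git/project.git",
]

for i, url in enumerate(REPLICAS, start=1):
    subprocess.run(
        ["git", "-C", PRIMARY, "remote", "add", "--mirror=push", f"replica{i}", url],
        check=True,
    )
```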

What real-world workload do you have that appeared suddenly enough that your devs couldn't stay on top of it, and you find yourself seeking advice from us relatively clueless dweebs on Lemmy? It's not a problem most git users deal with. Git is pretty fast, and most users are fine with a single server and a backup.

[–] solrize@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (6 children)

I wonder if you could use HAProxy for that; it's usually used with web servers. This is a pretty surprising request though, since git is fast. Do you have an actual real-world workload that needs such a setup? Otherwise, why not run a normal setup with one server being mirrored, plus a failover IP, which lots of VPS hosts can supply?

And can you use round-robin DNS instead of a load balancer?
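If you go the round-robin route, the idea is just that the hostname has one A record per server and clients spread themselves across whichever address their resolver hands back first. A quick way to see what a client would get (git.example.com is a placeholder):

```python
import socket

HOST = "git.example.com"  # placeholder; use your real git hostname

# Each tuple is one address the name resolves to; with round-robin DNS
# there will be several, and resolvers typically rotate the order.
for family, _, _, _, sockaddr in socket.getaddrinfo(HOST, 22, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr)
```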
