this post was submitted on 26 Apr 2026
469 points (97.6% liked)

Technology

top 50 comments
[–] Fijxu@programming.dev 10 points 19 hours ago

Really good blog post. As a sysadmin, I'd say this is a great way to handle a migration with zero downtime.

When I was migrating my servers to NixOS I did the same thing: I tried to make my configuration match the old OS so everything would work cleanly, and it did, but since it was all on the same server I had to migrate things like files and databases manually.

[–] carrylex@lemmy.world 37 points 1 day ago* (last edited 1 day ago) (5 children)

Ok, so if I'm reading this correctly: they migrated from an OS and MySQL version that have received no updates for at least 2 years to MySQL 8.0, which will stop getting updates in 4 days. Also, every service is running without any containerization and there is a single database for everything... and it all runs on a single host, and I didn't read one word about a backup strategy or disk encryption. Also not a single word about infrastructure as code like Ansible so that you can reliably recreate the system... and the whole thing is hosted in Germany for a Turkish software company - sounds like very good latency.

My personal conclusion: This system WILL fail and the guy who designed it is stuck somewhere 10-20 years in the past.

[–] jj4211@lemmy.world 2 points 8 hours ago

They migrated from an OS and MySQL version that have received no updates for at least 2 years to MySQL 8.0, which will stop getting updates in 4 days.

I agree that it was an odd choice, as was the OS: going to Alma 9 when Alma 10 had already been out for some time. You'd think that if they wanted the long-term updates they would have gone to 10 to get the most out of it. If they went from 8 to 9, sure, some people like staying in territory Red Hat has gotten bored of and won't mess with anymore, but 7 to 9 suggests they didn't do timely upgrades before.

Also every service is running without any containerization and there is a single database for everything

Well, he explicitly said they have 30 databases, though I suppose you meant a single MySQL instance. As for containerization, I won't judge one way or the other; I've seen enough amateur-hour containerization not to draw immediate conclusions on that front.

it all runs on a single host

Yeah, that seems pretty dire given his stated usage scenario, and it seems very explicit that their entire internet-facing world is that single host...

backup strategy or disk encryption

It was a post narrowly focused on the migration, so I don't expect a full inventory of everything they do; backup strategy, disk encryption, and all sorts of other things may have been omitted as having nothing to do with the core topic. The biggest red flag on this front is that he explicitly mentions the old setup having "backups enabled" and the new setup having "RAID1", which does make me wonder if they think RAID1 is a credible answer to "backup".

Also not a single word about infrastructure as code

Again, not necessarily in scope for this document, so I'm not sure I'm going to judge on this one. I routinely take material expressed in terms of an Ansible play and "generic it out" for general consumption when discussing with people outside my organization.

The whole thing is hosted in Germany for a Turkish software company

I'll confess to not liking it being at a single site, but to the extent they do pick a single site, Germany might make sense because:

Several live mobile apps serving hundreds of thousands of users

Their userbase may be better connected to Germany than to Turkey, and user latency is what matters more.

My biggest concerns would be mitigated if they said the German-hosted server is their off-prem solution and the stack is also hosted on-prem, giving them multiple sites, but that's a bit much to imagine given the process described; the migration as described wouldn't make sense in that scenario.

[–] katze@lemmy.4d2.org 5 points 12 hours ago

the guy who designed it is stuck somewhere 10-20 years in the past.

Well, using containerization for everything is very 2015-ish.

[–] 0x0@lemmy.zip 2 points 12 hours ago (1 children)

My personal conclusion: he knows what he's doing.

[–] jj4211@lemmy.world 3 points 8 hours ago

Sure, though there is a worrying lack of backup and resilience in the described scenario. It has the smell of someone who hasn't been bitten yet and isn't paying attention to best practices in the industry.

I'll give them a break on some of the things as not necessarily being a 'must', but being hard-bound to a single server strikes me as a disaster waiting to happen.

[–] xthexder@l.sw0.com 11 points 1 day ago* (last edited 1 day ago)

Sounds like my homelab has better redundancy than these guys, and my monthly bill isn't much different than their new one. I only pay for power and networking, since I own my own hardware. I'm colocating in my city, so my latency to home is about 1ms, and I've got a full mirrored server in my house. Certain files are further backed up elsewhere for proper 3-2-1 backup (+ each server running raidz2 with disk encryption). Even if my home Internet goes out, I still have full access to my files at home, and all my public services stay running in the data center. If either server fails, it's all set up with containers so it's easy to spin up each service somewhere else.
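
For anyone curious, the raidz2 + encryption part is simple to reproduce; roughly something like this, where the pool, dataset, and disk device names are just placeholders for your own layout:

# Six-disk raidz2 pool: any two disks can fail without data loss
zpool create -o ashift=12 tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# Natively encrypted dataset; the passphrase is asked for at import/mount time
zfs create -o encryption=on -o keyformat=passphrase tank/data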

One thing that's tricky to get right with disk encryption (especially with encrypted /boot) is having a redundant boot partition. I was able to hack this together by having software RAID duplicate my boot partition to a second drive. Now if I remove either OS boot drive it falls back to the remaining one. To avoid breaking EFI boot, you need to use the 1.0 RAID metadata format so the metadata is stored at the end of the partition, not at the front where the EFI firmware reads.
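
In case anyone wants to replicate it, the rough shape of that with mdadm is below. The device names are placeholders and the details will vary with your distro and bootloader, so treat it as a sketch rather than a recipe:

# Mirror the EFI system partition across two drives.
# Metadata format 1.0 keeps the md superblock at the END of the partition,
# so the firmware still sees a plain FAT filesystem on either member.
mdadm --create /dev/md/esp --level=1 --raid-devices=2 --metadata=1.0 /dev/nvme0n1p1 /dev/nvme1n1p1
mkfs.vfat -F32 /dev/md/esp
mount /dev/md/esp /boot/efi   # or /efi, depending on your layout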

[–] Passerby6497@lemmy.world 6 points 1 day ago (1 children)

every service is running without any containerization and there is a single database for everything... and it all runs on a single host and I didn't read one word about a backup strategy or disk encryption.

Man, a paragraph that can give someone some serious PTSD flashbacks....

The number of times I've had to clean up a customer's environment after they let little Billy play corporate IT and things went boom.....

[–] dreamkeeper@literature.cafe 15 points 1 day ago* (last edited 1 day ago)

Always nice to see people moving away from enshittified US services.

[–] nibbler@discuss.tchncs.de 8 points 1 day ago* (last edited 1 day ago) (1 children)

Once the dump was complete, we transferred it to the new server using rsync over SSH. With 248 GB of compressed chunks, this was significantly faster than any other transfer method:

rsync -avz --progress /root/mydumper_backup/ root@NEW_SERVER:/root/mydumper_backup/

That's a bit weird. rsync -z is compression, but they already compressed in the mydumper export, so this is a slowdown (or neutral at best). Also, in my experience rsync is as fast as scp is as fast as piping anything to a TCP port on the destination, etc. rsync doesn't win on speed, it wins on being able to resume, so to say...
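
For already-compressed dumps, something like this (the same command minus -z, plus --partial so an interrupted transfer can pick up where it left off) should be at least as fast without burning CPU on double compression:

# mydumper output is already compressed, so skip rsync's -z
rsync -av --partial --progress /root/mydumper_backup/ root@NEW_SERVER:/root/mydumper_backup/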

besides this: nice read!

[–] jj4211@lemmy.world 1 points 10 hours ago

It's probably just a knee-jerk inclusion.

As you say, other solutions may be just as good or marginally better, but this is an all-around reasonable approach that doesn't hurt anything even when it's pointless.

[–] Gonzako@lemmy.world 63 points 1 day ago (5 children)

Me running everything on a single Postgres instance on my shitbox for 0€/month

[–] Dyskolos@lemmy.zip 39 points 1 day ago (17 children)

0? My energy company says I'm using power equivalent to a family of eight. And it's just wifey, the servers and me. I had cops here asking if I grow weed 😁

So unless you steal power, it surely isn't close to 0 😁

[–] jj4211@lemmy.world 1 points 8 hours ago

My solar covers more than my entire electric bill in the mild-weather months.

I was thinking of moving to more modern systems with modest power consumption too. One of the systems I picked up is basically a case-as-a-heatsink with no fans.

[–] 0x0@lemmy.zip 2 points 12 hours ago (1 children)

And it's just wifey, the servers and me.

So: you, the wife, and your 6 digital kids. Checks out.

[–] Dyskolos@lemmy.zip 3 points 11 hours ago

If you put it that way....kinda 😬

[–] wltr@discuss.tchncs.de 20 points 1 day ago (5 children)

I realised I don’t need my servers being online 24/7, so for me that’s Raspberry Pi and equivalents, plus powering on computers on demand.

[–] greybeard@feddit.online 13 points 1 day ago (6 children)

A trick I realized a few years ago: Caddy has a module you can build it with that does WOL. So I was able to run a Caddy reverse proxy that woke up my higher-powered server on demand and let it go back to sleep when I wasn't using it. Might be a bad idea for a database server, but for my uses it was pretty simple and effective.
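
The same idea is easy to sketch in plain shell if you don't want to build Caddy with the module: wake the machine when it's needed, then wait for it to come up. The address and MAC below are made up:

#!/bin/sh
# Wake the backend if it isn't answering, then wait until it responds.
BACKEND=192.168.1.50           # placeholder: the sleeping server
MAC=aa:bb:cc:dd:ee:ff          # placeholder: its NIC's MAC address
if ! ping -c1 -W1 "$BACKEND" >/dev/null 2>&1; then
    wakeonlan "$MAC"           # or: etherwake "$MAC"
    until ping -c1 -W1 "$BACKEND" >/dev/null 2>&1; do
        sleep 2
    done
fi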

[–] W98BSoD@lemmy.dbzer0.com 5 points 1 day ago

… that woke up my higher powered server on demand, and let it go back to sleep when I wasn't using it.

Get a load of this guy not using his high powered server 24/7/365.

[–] Armand1@lemmy.world 17 points 1 day ago (8 children)

I also self-host and I wouldn't say the cost is zero. In the UK, energy costs alone mean that a 40W computer costs £8 per month to run (assuming a 28p/kWh price).

Of course, that's assuming you run it 24/7 at that full draw, but I know my PCs use more than that.
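
Sanity-checking that figure, assuming a 30-day month of continuous draw:

# 40 W for 24 h/day over 30 days = 28.8 kWh; at £0.28/kWh that's about £8.06
echo "0.040 * 24 * 30 * 0.28" | bc -l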

[–] Wispy2891@lemmy.world 17 points 1 day ago (10 children)

Not a sysadmin, just a hobbyist: is it OK to have such a large install on bare metal and not containerized?

For example the issue of MySQL 5 being unavailable would be a non-issue with a container
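
As a rough illustration (the data path and password here are placeholders), pinning the database version independently of the host's package manager is just:

# Run a specific MySQL version regardless of what the host distro ships;
# upgrading later is (roughly) a matter of changing the image tag.
docker run -d --name mysql57 \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /srv/mysql-data:/var/lib/mysql \
  -p 3306:3306 \
  mysql:5.7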

[–] jj4211@lemmy.world 2 points 10 hours ago* (last edited 5 hours ago) (1 children)

One thing is that I don't know for sure whether it is containerized or not. The topic was the migration, and that facet wouldn't be relevant to the core of it. When I'm doing a write-up of things like this, I tend to omit details like that unless they're core to the subject at hand, including replacing a funky ingress situation with a more universally recognizable nginx example. The users of a container setup would understand how to translate it to their scenario.

For another, I'll say that I've probably seen more people get screwed up because they didn't understand how to use containers and used them anyway. Most notably, they make their networking needlessly convoluted and then can't understand it. Also, when they kind of mindlessly split a flow into "microservices", they get lost in the debugging.

They are useful, but I think people might do a lot better if they:

  • Thought more carefully about how they split things up
  • Went ahead and used host networking; it's pretty good
  • Used unix domain sockets instead of binding to TCP for everything. I much prefer reverse-proxying to a unix domain socket over juggling IPs and ports, which is most of what container networking buys people, and the flow gets gnarly (see the sketch at the end of this comment)
  • Were wary of random Docker Hub "appliances"; they tend to be poorly maintained

If you are writing in Rust or Go, containers might not really buy you much other than a headache, so long as you use distinct users for security isolation. For something like Python, it might be a more thorough approach than virtualenv, though I wouldn't want to keep a Python stack maintained given how fickle the ecosystem is. Node is pretty much always "virtualenv"-like, but even worse for fickle dependencies.
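
To make the unix-socket point concrete: gunicorn and the paths below are just stand-ins for whatever app server you actually run, but the shape is the same with most servers and reverse proxies:

# App server listens on a unix socket instead of a TCP port
gunicorn --bind unix:/run/myapp/app.sock myapp:app &
# Quick check against the socket; no IPs or port numbers involved
curl --unix-socket /run/myapp/app.sock http://localhost/
# On the nginx side the upstream would then be something like:
#   proxy_pass http://unix:/run/myapp/app.sock:;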

[–] Wispy2891@lemmy.world 1 points 7 hours ago (1 children)

One thing is that I don’t know for sure if it is containerized or not

They wrote:

Being stuck on CentOS 7 meant we were also stuck on MySQL 5.7

If they had used some kind of containerization, the native packages available on the host wouldn't constrain the specific version of MySQL they want to use.

[–] jj4211@lemmy.world 1 points 5 hours ago

I suppose they might have used a common MySQL instance for their containerized infrastructure, or a crufty base image for their container(s)... but you do raise a pretty good indicator that at least one key thing is not running in a container.

But I'm not going to judge too hard on container/no container. The vintage of the platform is broadly problematic either way. I've seen, particularly in enterprise IT, some shockingly old container bases, with teams unwilling to refresh them because 'they work'.

In fact, teams that once would have been forced to rebase their crufty dependencies every so often, because those were bundled with an OS that had become unacceptable, now gleefully push their ancient 12-year-old stack because containers let it keep running no matter what kernel is underneath.

[–] Kissaki@feddit.org 4 points 15 hours ago

Totally fine. Containerization comes at a cost too. It's a matter of system design, knowing your risks and complexities, and handling them accordingly.

With such a size, before containerization I'm wondering whether these services aren't independent enough to be split onto multiple servers.

Having everything together reduces system complexity in some ways, but not in others.

[–] Evotech@lemmy.world 1 points 13 hours ago

Bare metal is definitely making a comeback. But you should have some way of orchestrating the deployment as code regardless.

[–] EncryptKeeper@lemmy.world 8 points 1 day ago* (last edited 1 day ago)

Yes it’s ok, in general. It’s not the most modern or efficient way of managing infrastructure but it’s worked for decades now. It all depends on what you’re hosting, for who, and for how many people.

If you're hosting internal company infrastructure for a relatively static number of users in a single region, or a set few regions to deliver to, bare-metal monolithic stuff is absolutely fine. It's when you're an app or service company, your infrastructure is the back end for a public service that needs to be able to scale dynamically, you're worried about high 24/7 uptime, and latency to end users is a global issue, that things like microservice architecture, containerization, and IaC start becoming important.

The whole containerization craze is important for microservices architecture, where you split your app into different pieces. This lets you scale different parts of your app as needed, it prevents your entire app from failing just because one part of it failed, it allows for lifecycle management like blue/green deployments with no downtime, and it allows developers to work on different parts of the app and update at a faster cadence than one big release of the entire thing every time you update one small part of it, things like that.

[–] raspberriesareyummy@lemmy.world 5 points 1 day ago (1 children)

For example the issue of MySQL 5 being unavailable would be a non-issue with a container

So people careless enough to "just container it" for old, possibly security-compromised software - you call that a "non-issue"? How about upgrading and configuring for compatibility?

[–] Wispy2891@lemmy.world 2 points 21 hours ago (2 children)

They're the ones running a 10-year-old database on an 11-year-old OS on a public-facing server "because it just works", not me

If it had been in a container, they could have just tagged a new version when the database went EOL 5 years ago, without being locked to whatever the package manager was offering.

As it was, they used MySQL 5 on CentOS 7 from the package manager and couldn't easily upgrade

[–] raspberriesareyummy@lemmy.world 1 points 9 hours ago

They're the ones running a 10-year-old database on an 11-year-old OS on a public-facing server "because it just works", not me

My point was that they upgraded to a newer database (also old, but newer), which is arguably better than containerization.

[–] Evotech@lemmy.world 1 points 12 hours ago

With a deployment this small you're just moving the issue to the containerisation layer, unless you use some SaaS Kubernetes or other managed solution.

[–] uuj8za@piefed.social 46 points 1 day ago (14 children)

I'm in the US and when I tried migrating from DO to Hetzner, I got asked to upload my passport to prove I'm not spam or something. Same experience with OVH.

Is this a thing for all European hosting companies? I ended up finding some Canadian hosting that would just let me sign up and pay like normal.

[–] rjek@feddit.uk 29 points 1 day ago (1 children)

Lots of respectable EU hosting companies, and also apparently OVH, will ask for ID if they think there's a chance you're taking the piss, so they can ban you. It's not just anti-spam, it's anti-abuse and preventing non-payment. They think there was a risk involved in accepting your business (whatever that may be; obviously companies don't divulge their criteria here), and if you go elsewhere they're not upset about it, for that reason.


When I signed up at Hetzner, I had to go through the same anti-abuse check. However, I could choose not to upload my ID and pre-pay 20€ instead. Did that and have been a happy customer since.

[–] SMillerNL@piefed.social 19 points 1 day ago (1 children)

Seems like it would have been a good moment to split the database from the many web servers and reduce the single point of failure.
