Yes, ask why it deleted data when it didn't do anything of the sort and it will still output similar text. You asked it to confess and explain, so it will do just that regardless of whether it fits.
I suppose they might have used the common mysql instance for containerized infrastructure, or a crufty base image for their container(s)... But you do raise a pretty good indicator that at least one key thing is not running in a container.
But I'm not going to judge too hard on container/no container. The vintage of the platform is broadly problematic either way. I've seen particularly in enterprise IT some shockingly old container bases, with teams unwilling to refresh those because 'they work'.
In fact, teams that once would be forced to rebase their crufty dependencies every so often because they were bundled with an unacceptable OS now gleefully push their ancient 12-year-old stack, because containers let it keep running no matter what kernel is underneath.
My solar covers more than my entire electric bills in the mild weather months.
I was thinking of moving to more modern systems with modest power consumption too. One of the systems I picked up is basically a case-as-a-heatsink with no fans.
Sure, though there is a worrying lack of backup and resilience in the described scenario. It has the smell of someone who hasn't been bitten yet and isn't paying attention to best practices in the industry.
I will give a break on some of the things as not necessarily being a 'must', but being hard-bound to a single server strikes me as a disaster waiting to happen.
They migrated from an OS and MySQL version that had received no updates for at least 2 years to MySQL 8.0, which will stop getting updates in 4 days.
I agree that it was an odd choice, as was the OS: going to Alma 9 when Alma 10 had already been out for some time. You would think that if they wanted the long-term updates they would have gone to 10 to get the most out of them. If they had gone from 8 to 9, sure, some people like staying where RedHat got bored and won't mess with things anymore, but 7 to 9 suggests they weren't doing timely upgrades before.
Also every service is running without any containerization and there is a single database for everything
Well, he said explicitly they have 30 databases, though I suppose you meant a single mysql instance. I won't judge one way or the other about containerization, as I've seen enough amateur-hour containerization not to draw immediate conclusions either way on that.
it all runs on a single host
Yeah, that seems pretty dire given his stated usage scenario, and it seems very explicit that their entire internet facing world is that single host...
backup strategy or disk encryption
It was a post narrowly discussing migration, so I don't expect a full inventory of everything they do; backup strategy, disk encryption, and all sorts of other things may be omitted as having nothing to do with the core thing. The biggest red flag on this front is that he explicitly mentions the old setup having "backups enabled" and the new setup having "RAID1", which does make me wonder if they think RAID1 is a credible answer for "backup".
Also not a single word about infrastructure as code
Again, not necessarily in-scope for this document, so not sure if I'm going to judge on this one. I routinely take material expressed in terms of an ansible play and "generic it out" for general consumption when discussing with people outside my organization.
The whole stuff is hosted in Germany for a Turkish software company
I'll confess to not liking it being in a single site, however to the extent they select a single site, Germany might make sense because:
Several live mobile apps serving hundreds of thousands of users
Their userbase may be better connected to Germany than Turkey, and the user latency matters more.
My biggest concerns would be mitigated if they said that the German hosted server is their off-prem solution but it is also hosted on-prem giving them multiple sites, but I think that's a bit much to imagine given the process described. The described migration process wouldn't make sense in that scenario.
One thing is that I don't know for sure if it is containerized or not. The topic was migration, and that facet would not be relevant to the core. When I'm doing a write-up of things like this, I tend to omit details like that unless they are core to the subject at hand, including replacing a funky ingress situation with a more universally recognizable nginx example. The users of a container setup would understand how to translate to their scenario.
For another, I'll say that I've probably seen more people get screwed up because they didn't understand how to use containers and used them anyway. Most notably, they make their networking needlessly convoluted and then can't understand it. Also, when they kind of mindlessly divide a flow into "microservices", they get lost when debugging.
They are useful, but I think people might do a lot better if they:
- More carefully considered how they split things up
- Went ahead and used host networking; it's pretty good
- Used unix domain sockets instead of binding to TCP for everything. I much prefer reverse proxying to a unix domain socket over juggling IPs/ports, which is most of what container networking buys people, and that flow is too gnarly
- Were wary of random dockerhub "appliances"; they tend to be poorly maintained
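The unix-domain-socket point above can be sketched with a minimal nginx reverse-proxy config. This is a hypothetical illustration, not from the original post — the server name, certificate paths, and socket path are all made up:

```nginx
# Hypothetical sketch: terminate TLS in nginx and reverse proxy to an app
# listening on a unix domain socket. No container network, no published
# ports -- the backend never binds an IP/port at all.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        # nginx's unix-socket upstream syntax: the socket path sits
        # between "unix:" and the trailing ":", followed by the URI.
        proxy_pass http://unix:/run/myapp/app.sock:/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The app side just needs to listen on `/run/myapp/app.sock` with permissions that let the nginx worker user connect; filesystem permissions then do the access control that container networks are usually drafted in for.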
If you are writing in rust or golang, containers might not really buy you much other than a headache, so long as you use distinct users for security isolation. For something like python, a container might be a more thorough approach than virtualenv, though I wouldn't like to keep a python stack maintained with how fickle the ecosystem is. Node is pretty much always "virtualenv"-like, but even worse for fickle dependencies.
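The "distinct users" route for a static rust/golang binary can be sketched as a systemd unit using its built-in sandboxing, instead of a container. A hypothetical example — the unit name, binary path, and directory are invented for illustration:

```ini
# Hypothetical sketch: /etc/systemd/system/myapp.service
# Isolation for a static binary via systemd directives rather than a container.
[Unit]
Description=myapp (static binary, no container)

[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes          # run as a transient unprivileged user
NoNewPrivileges=yes      # no setuid escalation
ProtectSystem=strict     # read-only view of /usr, /etc, ...
ProtectHome=yes          # no access to /home
PrivateTmp=yes           # private /tmp
StateDirectory=myapp     # writable /var/lib/myapp owned by the dynamic user

[Install]
WantedBy=multi-user.target
```

These are stock `systemd.exec` directives; for a binary with no interpreter or dependency tree to ship, this covers most of the isolation a container would provide, without a second network namespace to debug.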
It's probably just a knee jerk inclusion.
As you say, other solutions may work just as well or marginally better, but this is an all-around good approach that even when pointless doesn't hurt anything.
Yes, but the point is their dying is more of a distribution.
Nah, the producers of human slop are ecstatic, because now they can just prompt up their slop and post something for engagement; before, they had to at least put in a modicum of effort to make their slop. It used to take at least as long to make the human slop as a human would take to view it; now they can get output with even less effort than the human wastes seeing it.
The slop flood gates are open.
There is a scenario where I would prefer this outcome.
All too often in a meeting things start spinning because they have decided to do something but have uncertainty, so they keep going around in circles speculating on what might go wrong and worrying about everything.
Take a break, and then we will continue your precious meeting after we actually know what did or didn't pan out. 95% of the time the follow-up would be "it went fine, didn't need endless contingency plans".
It's a bit of hyperbole at the moment, where the concrete laws are basically "OS asks the user for their age on the honor system and relays that to websites". Linux distros can add that without much real controversy.
The problem is some are seeking laws that require the OS to actually verify age, which in practice means locking things behind something like a Google account and having an online account vendor process your real identity and genuinely validate your age. Under such a regime, the Linux desktop as it exists today becomes infeasible. Also, Microsoft could say they absolutely cannot allow local accounts anymore by law and force Microsoft accounts...
My electric bill last month was $15. It eventually is financially worth it.
Don't know how hard you went on your home lab; I use office rather than datacenter equipment, and it's quiet and plenty for my needs. For my professional test/dev needs I have such equipment at work, so I don't need to train at home on gear for the sake of competence in the field.