The article is very much off point.
The main issue is the software crisis: hardware performance follows Moore's law, while developer productivity stays mostly constant.
If your computer's memory is counted in bytes without an SI prefix and your CPU has maybe a dozen or two instructions, then it's possible for a single human being to comprehend everything the computer is doing and to program it very close to optimally.
The same is not possible when your computer has subsystems upon subsystems, and even the keyboard controller has more power and complexity than the whole Apollo program combined.
So to program exponentially more complex systems, we would need an exponentially larger software development budget. But since it's really hard to scale software developers exponentially, we've been using abstraction layers to hide complexity, to share and reuse work (no need for everyone to reinvent the templating engine), and to establish clear boundaries that allow for better cooperation.
That was the case long before Electron. Compiled languages started the trend, languages like Java and C# deepened it, and modern middleware and frameworks have only accelerated it.
OOP complains about the chain "React → Electron → Chromium → Docker → Kubernetes → VM → managed DB → API gateways". But he doesn't consider that even if you run "straight on bare metal", there's still a whole stack of abstractions between your code and its execution. Every major component inside a PC nowadays runs its own dedicated OS that neither the end user nor the developer of ordinary software ever sees.
But it always comes back to the software crisis. If we had infinite developer resources, we could write optimal software. We don't, so we can't, and thus we add abstraction layers to make development manageable, because otherwise we would never ship anything.
If you want to complain, complain to the managers who don't allocate enough resources and to the investors who don't want to dump millions into the development of simple programs. And to the customers who aren't OK with simple things but want modern, cutting-edge everything in their programs.
In the end it sadly really is the case: memory and performance get exponentially cheaper, while developers are still mere humans whose output stays largely constant.
So which of these two values SHOULD we optimize for?
The real problem with software quality is not abstraction layers but "business agile" (as in "the business doesn't need to make any long-term plans but can cancel or change anything at any time") and a lack of QA budget.
THANK YOU.
I migrated services from LXC to Kubernetes. One of these services has been exhibiting concerning memory-footprint issues. Everyone immediately went "REEEEEEEE KUBERNETES BAD EVERYTHING WAS FINE BEFORE WHAT IS ALL THIS ABSTRACTION >:(((((".
I just spent three months doing optimization work, for memory/resource leaks in that old C codebase. Kubernetes had fuck-all to do with any of them (which is obvious to literally anyone who has any clue how containerization works under the hood). The codebase just had very old-fashioned manual memory-management leaks, plus a weird interaction between jemalloc and RHEL's default kernel settings.
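For anyone who hasn't had to chase one of these down, "old-fashioned manual memory-management leaks" usually look something like the sketch below. This is illustrative C only, not the actual codebase; the function name and buffer size are made up. The bug is the classic one: an error path returns early and never frees the buffer it just allocated.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical request handler, for illustration only. */
char *render_response(const char *payload)
{
    char *buf = malloc(64 * 1024);        /* allocated on every call */
    if (buf == NULL)
        return NULL;

    if (strlen(payload) >= 64 * 1024) {
        /* BUG: early return without free(buf). Each oversized request
         * quietly leaks 64 KiB; under load that adds up to gigabytes. */
        return NULL;
    }

    strcpy(buf, payload);
    return buf;                           /* caller is expected to free() this */
}

int main(void)
{
    /* Build a payload that always hits the leaking error path. */
    static char big[80 * 1024];
    memset(big, 'x', sizeof big - 1);

    for (int i = 0; i < 10000; i++)
        free(render_response(big));       /* returns NULL here, so nothing is reclaimed */

    return 0;                             /* ~640 MiB of allocations were never freed */
}
```

Note that there's no container runtime anywhere in sight; the same code leaks exactly the same way on bare metal, in LXC, or in a Kubernetes pod.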
The only reason I spent all that time optimizing instead of just throwing more RAM at the problem? Due to incredible levels of business-side incompetence I'll spare you the details of, our 30-day growth predictions have error bars so many orders of magnitude wide that we're stuck in a stupid loop: we won't order hardware we probably won't need, but if we do get a best-case user influx, the lead time on new hardware is too long to get us the RAM in time. Basically, the virtual price of RAM is sky-high because the suits keep pinky-promising that we'll get a bunch of users soon and keep being wrong about it.
Are you crazy? Profit goes to shareholders, not back into the project. Get real.
I agree with the general idea of the article, but there are a few wild takes that kind of discredit it, in my opinion.
"Imagine the calculator app leaking 32GB of RAM, more than older computers had in total" - well yes, the memory leak went on to waste 100% of the machine's RAM. You can't leak 32GB of RAM on a 512MB machine. Correct, but hardly mind-bending.
"But VSCodium is even worse, leaking 96GB of RAM" - again, 100% of available RAM. This starts to look like a bad faith effort to throw big numbers around.
"Also this AI 'panicked', 'lied' and later 'admitted it had a catastrophic failure'" - no it fucking didn't, it's a text prediction model, it cannot panic, lie or admit something, it just tells you what you statistically most want to hear. It's not like the language model, if left alone, would have sent an email a week later to say it was really sorry for this mistake it made and felt like it had to own it.
A 32GB swap file or a crash. Fair enough that you'd want to restart the computer anyway, even if you have 128GB+ of RAM. But a calculator taking two years off your SSD's life is not great.
It's a bug and of course it needs to be fixed. But the point was that a memory leak leaks memory until the system is out of memory or the process is killed, so saying "it leaked 32GB of memory" is pointless (the sketch after this comment makes that concrete).
It's like claiming that a puncture on a road bike is especially bad because it leaks 8 bar of pressure instead of the 3 bar a puncture on a mountain bike might leak, when in fact both punctures just leak all the pressure in the tire, and in the end you have a bike you can't use until you fix it.
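To make that concrete, here's a minimal C program (an illustration, not anything from the article; don't run it on a machine you care about) that just allocates and touches memory forever without freeing it. It stops only when malloc fails or the kernel's OOM killer terminates the process, so whether the headline number ends up being 32GB or 96GB depends entirely on how much memory the machine had, not on anything about the bug itself.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = 64 * 1024 * 1024;    /* leak 64 MiB per iteration */
    size_t total_mib = 0;

    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL) {                      /* allocation refused: we've hit the limit */
            fprintf(stderr, "malloc failed after leaking %zu MiB\n", total_mib);
            break;
        }
        memset(p, 0xA5, chunk);               /* touch the pages so they are really resident */
        total_mib += chunk >> 20;
        fprintf(stderr, "leaked %zu MiB so far\n", total_mib);
        /* intentionally never free(p) */
    }
    return 0;
}
```

On a typical Linux box with overcommit enabled it usually ends with the OOM killer rather than a NULL return, but either way the total "leaked" is simply whatever the machine (plus swap) could give it.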
Yeah, that's quite on point. Memory leaks until something throws an out-of-memory error and crashes.
What makes this really seem like a bad-faith argument instead of a simple misunderstanding is this line:
OOP seems to understand (or at least claims to understand) the difference between allocating (and wasting) memory on purpose and a leak that just fills up all available memory.
So what does he want to say?
Yeah, I hate that agile way of dealing with things. Business wants prototypes ASAP, but if one is actually deemed useful, there's no budget to productize it, which means that if you don't want to take all the blame for a crappy app, you have to invest heavily in every single prototype. Prototypes that get called next-gen projects but get cancelled nine times out of ten 🤷🏻♀️. Make it make sense.
This. Prototypes should never be taken as the basis of a product; that's the whole reason you make them: to make mistakes in a cheap, discardable format so you don't make those mistakes when building the actual product. I can't remember a single time, though, when that's what actually happened.
They just label the prototype an MVP and suddenly it's the basis of a new project with a 20-year runtime.
In my current job, they keep switching everything around all the time. We get a new product: super urgent, super high-profile, highest priority, crunch time to get it out on schedule, and then two weeks before launch it gets cancelled without further explanation. Because we are agile.
MAXIMUM ARMOR
Shit, my GPU is about to melt!