this post was submitted on 28 Mar 2026
162 points (90.1% liked)
Technology
you are viewing a single comment's thread
What I think you are also seeing is AI sucking at some things and doing better than humans in others.
AI is pretty great at adding unit tests to code, for example, where humans do a just-OK job. Or at writing code for a direct, well-scoped, small problem.
AI is just OK at understanding product nuance and choices during larger implementations, or at getting end-to-end coding right for any complex use case.
Just assuming this is all true (i.e. that AI can produce both good and bad code), why would Linux development succeed at something that Microsoft (which has an inside track with AI, far more money, and far more maturity) failed at?
Because development direction in open source software is decided by engineers instead of corporate politicians.
Do you have a reason to assume that outsiders using proprietary insider software will do it better, or is that just hope and a prayer?
What do you mean? That's the opposite of my point.
I've been using Linux with a ton of open source software for a decade, and it has only gotten better, while Microsoft's policies have only gotten worse. That's direct empirical evidence that it works.
The AI models are the proprietary outsider software. Which is my point.
And? An LLM is nothing more than a tool. It has nothing to do with politics.
You just said using proprietary outsider software was the opposite of good; now you've flipped to calling it just a tool. Pick an opinion.
Do you have a problem with the distinction between operating system development and the tools used to develop it?