this post was submitted on 02 May 2026
34 points (94.7% liked)

Programming

[–] ISO@lemmy.zip 5 points 11 hours ago

There is no substitute for a static analyzer built into the compiler and informed by the type system. Near-zero bugs require provable static analysis that guarantees the prevention of an entire bug class, e.g. (safe) Rust for the bug classes it guarantees to prevent. Hopefully, future languages with even better type systems will help with even more bug classes, or incrementally improve on what Rust currently has to offer.
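To make the point concrete, here's a minimal sketch (my illustration, not the commenter's code) of the kind of compile-time guarantee meant here: safe Rust's borrow checker rejects use-after-free statically, so that bug class never reaches runtime.

```rust
// Minimal sketch: the borrow checker rules out use-after-free
// at compile time, without any external analysis tool.
fn main() {
    let s = String::from("hello");
    let r = &s; // shared borrow of `s`

    // Uncommenting the next line is a compile error (E0505):
    // `s` cannot be moved (and freed) while `r` still borrows it.
    // drop(s);

    assert_eq!(r.len(), 5); // the borrow is guaranteed valid here
}
```

An external C analyzer has to approximate this kind of aliasing and lifetime information; here it is checked by the compiler on every build.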

C code simply doesn't carry enough information for an external tool to push bug counts down to near zero. This is also exactly the point of struggle that led to the complete failure to deliver guaranteed safety in C++.

There have been murmurings, mainly from non-technical people, about how "AI" will render advancements in safer type systems nearly useless, because the magic ^(mushroom)^ AI will just find all the issues in code written in older languages. What they don't realize is that the effect will be the reverse. Many established projects that come with a high reputation and a veneer of maturity, indestructibility, and meticulousness will simply, and perhaps unfairly, lose that perception under a continuous barrage of potentially high-impact bugs and vulnerabilities surfaced by these tools, with not enough human bandwidth to keep up with them, and with new code susceptible to the same problems over and over. This will effectively lead to an even harder push for adopting technologies that prevent a good chunk of these bugs from ever happening in the first place, not the other way around.

[–] maegul@lemmy.ml 2 points 16 hours ago (1 children)

Anyone else wonder if there'll be an asymmetry between the ease/speed at which bugs/vulnerabilities are found with AI systems and the ease/speed at which they're fixed?

That is, AI assistance may find and exploit bugs more easily than it can fix them?

[–] StripedMonkey@lemmy.zip 2 points 12 hours ago

It's hard for me to imagine which direction it would end up going when right now the quality of AI output is so varied.