
Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true — and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study spent less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out code.

And it’s not just that AI-generated code missed Amodei’s benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to spew out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

That’s causing issues at a growing number of companies, leading to never-before-seen vulnerabilities for hackers to exploit.
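To make the risk concrete, here is a hypothetical Python sketch (not taken from the research) contrasting the kind of hole that slips through unreviewed generated code with the old-fashioned fix: SQL assembled by string concatenation versus a parameterized query.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # A pattern that often slips through unreviewed code: building SQL
    # by string concatenation. Input such as "x' OR '1'='1" rewrites
    # the query itself, the classic SQL injection.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix a careful reviewer would insist on: a parameterized query,
    # so the driver escapes the value and input can never alter the SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```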

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before it comes for everyone else.

The fact that AI is not actually improving coding productivity is a major bellwether for the prospects of an AI productivity revolution in the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei's made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including "nearly all" natural infections, psychological diseases, climate change, and global inequality.

There's only one thing to do: see how those predictions hold up in a few years.

[–] cupcakezealot@piefed.blahaj.zone 55 points 1 day ago (2 children)

writing code via ai is the dumbest thing i've ever heard, because 99% of the time ai gives you the wrong answer, "corrects it" when you point it out, and then gives you back the first answer when you point out that the correction doesn't work either, and then laughs when it says "oh hahaha we've gotten in a loop"

[–] cows_are_underrated@feddit.org 25 points 1 day ago (1 children)

You can use AI to generate code, but in my experience it's quite literally what you said. However, I have to admit that it's quite good at finding mistakes in your code. This is especially useful when you don't have that much experience and are still learning. Copy-paste the relevant code, ask why it's not working, and in quite a lot of cases you get an explanation of what isn't working and why. I usually try to avoid asking an AI and find an answer on Google instead, but that doesn't guarantee an answer.
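For example, here is the kind of bug where that workflow shines (a made-up Python snippet, not from my own code): a shared mutable default argument, something assistants reliably spot and explain.

```python
def add_score(score, scores=[]):
    # Bug: the default list is created once, at definition time, and
    # shared across calls, so earlier scores leak into later calls.
    scores.append(score)
    return scores

print(add_score(1))  # [1]
print(add_score(2))  # [1, 2]  <- surprising if you expected [2]

def add_score_fixed(score, scores=None):
    # The usual fix: default to None and build a fresh list per call.
    if scores is None:
        scores = []
    scores.append(score)
    return scores
```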

[–] ngdev@lemmy.zip 5 points 1 day ago (1 children)

if your code isn't working then use a debugger? code isn't magic lmao
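in python, for instance, the built-in breakpoint() drops you straight into pdb at the failing spot (a minimal made-up sketch):

```python
def average(values):
    total = sum(values)
    breakpoint()  # pauses here in pdb: try `p values`, `p total`, `n`, `c`
    return total / len(values)

average([])  # in pdb you can see len(values) == 0 before the ZeroDivisionError
```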

[–] cows_are_underrated@feddit.org 1 points 1 day ago (1 children)

As I already stated, AI is my last resort. If something doesn't work because it has a logical flaw, googling won't save me. So of course I debug it first, but if I get an error and have no clue where it comes from, no amount of debugging will fix the problem, because the error probably occurred because I don't know better. I am not that good of a coder and I am still learning a lot on a regular basis. For people like me, AI is in fact quite useful. It has basically become the replacement for pasting your code and error into Stack Overflow (which doesn't even work for me, since I always get IP banned when trying to sign up).

[–] ngdev@lemmy.zip 2 points 1 day ago (3 children)

you never stated you use it as a last resort. you're basically using ai as a rubber ducky

[–] Mniot@programming.dev 1 points 1 hour ago

More as an alternative to a search engine.

In my ideal world, StackOverflow would be a public good with a lot of funding and no ads/sponsorship.

Since that's not the case, and everything is hopelessly polluted with ads and SEO, LLMs are momentarily a useful tool for getting results. Their info might be only 3/4 correct, but my search results are also trash. Who knows what people will do in a year, when the LLMs have been eating each other's slop and are also being stuffed with ads by their owners.

[–] cheloxin@lemmy.ml 5 points 1 day ago (1 children)

I usually try to avoid...

Just because they didn't explicitly say the exact words you did doesn't mean it wasn't said

[–] ngdev@lemmy.zip 2 points 1 day ago* (last edited 1 day ago)

trying to avoid something also doesn't mean that the thing you're avoiding is a last resort. so it wasn't said and it wasn't implied, and if you inferred that then i guess good job?

[–] MangoCats@feddit.it 2 points 23 hours ago

I am a firm believer in rubber ducky debugging, but AI is clearly better than the rubber duck. You don't depend on either to do it for you, but as long as you have enough self-esteem to tell AI to stick it where the sun don't shine when you know it's wrong, it can help accelerate small tasks from a few hours down to a few minutes.

[–] BrianTheeBiscuiteer@lemmy.world 7 points 1 day ago (2 children)

Or you give it 3-4 requirements (e.g. prefer constants, use ternaries when possible), and after a couple of replies it forgets a requirement; you set it straight, then it immediately forgets another requirement.
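For concreteness, here is what those two example requirements look like in practice (a hypothetical Python sketch, names invented):

```python
# Before: a magic number and an if/else block.
def shipping_cost(weight_kg):
    if weight_kg > 20:
        return 15.0
    else:
        return 5.0

# After: a named constant ("prefer constants") and a conditional
# expression ("use ternaries when possible").
HEAVY_THRESHOLD_KG = 20

def shipping_cost_v2(weight_kg):
    return 15.0 if weight_kg > HEAVY_THRESHOLD_KG else 5.0
```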

[–] MangoCats@feddit.it 1 points 23 hours ago

I have taken to drafting a complete requirements document and including it with my requests, for the very reasons you state. It seems to help.

[–] WhiskyTangoFoxtrot@lemmy.world -1 points 1 day ago (1 children)

To be fair, I've had the same results working with human freelancers. At least AI is cheaper.

[–] MangoCats@feddit.it 1 points 23 hours ago

Same, and AI isn't as frustrating to deal with when it can't do what it was hired for and your manager now needs you to find something it can do, because the contract is funded...