Not even close.

With so many wild predictions flying around about the future of AI, it's important to occasionally take a step back and check in on what came true and what hasn't come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has actually been found to slow down software engineers and increase their workload. Though developers in the study did spend less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI's work, tweaking prompts, and waiting for the system to spit out code.

And it's not just that AI-generated code missed Amodei's benchmarks. In some cases, it's actively causing problems.

Cybersecurity researchers recently found that developers who use AI to churn out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.
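To make that failure mode concrete, here's a minimal, hypothetical sketch of one vulnerability class such audits commonly flag: SQL built by string concatenation, next to the parameterized query an experienced developer would write. This example is not from the study; the function names and table schema are invented for illustration.

```python
import sqlite3

# Hypothetical illustration of a common vulnerability class (SQL injection)
# flagged in audits of AI-generated code. Names and schema are made up.

def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so input like "x' OR '1'='1" matches every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
    # Safer: a parameterized query lets the driver handle escaping.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```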

That's causing issues at a growing number of companies, opening up never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before coming for everyone else.

That AI is not, in fact, improving coding productivity is a major bellwether for the prospects of an AI productivity revolution reaching the rest of the economy, the financial dream propelling unprecedented investment in AI companies.

It's far from the only harebrained prediction Amodei has made. He's previously claimed that human-level AI will someday solve the vast majority of social ills, curing "nearly all" natural infections and psychological diseases while fixing climate change and global inequality.

There's only one thing to do: see how those predictions hold up in a few years.

[–] poopkins@lemmy.world 59 points 1 day ago (5 children)

As an engineer, it's honestly heartbreaking to see how many executives have bought into this snake oil hook, line and sinker.

[–] rozodru@piefed.social 14 points 1 day ago (1 children)

As someone who now does consultation code review focused purely on AI... nah, let them continue drilling holes in their ship. I'm booked solid for the next several months now, multiple clients on the go, and I'm making more just being a digital janitor than I was as a regular consultant dev. I charge a premium to simply point said sinking ship toward land.

Make no mistake, though: this is NOT something I want to keep doing for the next year or two, and I honestly hope these places figure it out soon. Some have; some of my clients have realized that saving a few bucks by paying for an Anthropic subscription and paying a junior dev to be a prompt monkey while firing the rest of their dev team really wasn't worth it in the long run.

The issue now is they've shot themselves in the foot. The AI bit back. They need devs, and they can't find them, because putting out any sort of hiring ad results in hundreds upon hundreds of bullshit AI-generated resumes from unqualified people while the REAL devs get lost in the shuffle.

[–] MangoCats@feddit.it 2 points 23 hours ago

while firing the rest of their dev team

That's the complete mistake right there. AI can help write code, but it can't replace the organizational knowledge your team has developed.

Some shops may think they don't have/need organizational knowledge, but they all do. That's one big reason why new hires take so long to start being productive.

[–] Blackmist@feddit.uk 14 points 1 day ago

Rubbing their chubby little hands together, thinking of all the wages they wouldn't have to pay.

[–] expr@programming.dev 9 points 1 day ago (3 children)

Honestly, it's heartbreaking to see so many good engineers fall into the hype and seem unable to climb out of the hole. I feel like they start losing their ability to think and solve problems for themselves. Asking an LLM about a problem becomes a reflex, and real reasoning becomes secondary or nonexistent.

Executives are mostly irrelevant as long as they're not forcing the whole company into the bullshit.

[–] Mniot@programming.dev 1 points 55 minutes ago (1 children)

Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.

I'm seeing a lot of this, though. Like, I'm not technically required to use AI, but the VP will send me a message noting that I've only used 2k tokens this month and maybe I could get more done if I was using more...?

[–] expr@programming.dev 1 points 38 minutes ago

Yeah, while our CTO is giddy like a schoolboy about LLMs, he thankfully hasn't actually attempted to force them on anyone.

Unfortunately, a number of my peers now seem to have become irreparably LLM-brained.

[–] jj4211@lemmy.world 6 points 1 day ago

Based on my experience, I'm skeptical that someone who seemingly delegates their reasoning to an LLM was really a good engineer in the first place.

Whenever I've tried, it's been so useless that I can't really develop a reflex, since it would have to actually help for me to get used to just letting it do its thing.

Meanwhile, the very bullish people who are ostensibly the good engineers I've worked with are the ones who became pet engineers of executives and have long succeeded by sounding smart to those executives rather than by doing anything or providing concrete technical leadership. It's like having Gartner on staff, except without even the data that Gartner actually gathers, and Gartner is already a useless entity when it comes to actual guidance.

[–] auraithx@lemmy.dbzer0.com -3 points 1 day ago (2 children)

I mean, before, we'd just ask Google and read Stack Overflow, blogs, support posts, etc. Now it finds them for you instantly, so you can just click and read them. The human reasoning part simply shifts elsewhere: you solve the problem during debugging, before commits.

[–] expr@programming.dev 9 points 1 day ago (1 children)

No, good engineers were not constantly googling problems, because for most topics either the answer is trivial enough that an experienced engineer can answer immediately, or it's complex and specific enough to the company/architecture/task/whatever that googling would not be useful. Stack Overflow and the like have only ever really been useful as an occasional memory aid for basic things you don't do often enough to remember. Good engineers were, and still are, reasoning through problems, reading documentation, and iteratively piecing together system-level comprehension.

The nature of the situation hasn't changed at all: problems are still either trivial enough that an LLM is pointless, or complex and specific enough that an LLM will get it wrong. The only difference is that an LLM will spit out plausible-sounding bullshit and convince people it's valuable when it is, in fact, not.

[–] auraithx@lemmy.dbzer0.com -1 points 1 day ago (1 children)

In the case of a senior engineer, they wouldn't need to worry about the hallucination rate. The LLM is a lot faster than they are, and they can do other tasks while the code is being generated and then review the output. If it's trivial, you've saved time; if not, you can pull up the documentation and reason and step through the problem with the LLM. If you actually know what you're talking about, you can see when it slips up and correct it.

And that hallucination rate is rapidly dropping. We've jumped from about 40 percent accuracy to 90 percent over the past ~6 months alone (Aider polyglot coding benchmark), at about 1/10th the cost (IIRC).

[–] Feyd@programming.dev 8 points 1 day ago (1 children)

If it's trivial, you've saved time; if not, you can pull up the documentation and reason and step through the problem with the LLM

Insane that just writing the code isn't even an option in your mind

[–] auraithx@lemmy.dbzer0.com 0 points 9 hours ago (2 children)

That isn’t the discussion at hand. Insane you don’t realise that.

[–] expr@programming.dev 1 points 41 minutes ago

It is, actually. The entire point of what I was saying is that you have all these engineers now who reflexively jump straight to their LLM for anything and everything. Using their brains to simply write some code themselves doesn't even occur to them as something they should do. Much like you, by the sounds of it.

[–] Feyd@programming.dev 3 points 1 day ago

"Stack overflow engineer" has been a derogatory forever lol

[–] pycorax@sh.itjust.works 4 points 1 day ago

A tale as old as time...

[–] Feyd@programming.dev 2 points 1 day ago

Did you think executives were smart? What's really heartbreaking is how many engineers did. I even know some that are pretty good who tell me how much more productive they are and all about their crazy agent setups (from my perspective, I don't see any more productivity).