this post was submitted on 12 Sep 2025
1028 points (98.8% liked)

Technology


Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true — and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study spent less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out code.

And it's not just that AI-generated code missed Amodei's benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to churn out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.
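To make that concrete, here's a hypothetical sketch (not taken from the research) of one classic class of flaw often cited in reviews of AI-generated code: building a SQL query from user input by string interpolation instead of using a parameterized query.

```python
import sqlite3

# Toy in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so an input like "' OR '1'='1" rewrites the query and returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Safe: the driver binds the parameter, treating it as data, not as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
print(find_user_safe("' OR '1'='1"))    # returns []
```

The two functions differ by a few characters, which is exactly why this kind of bug slips through review when the volume of generated code goes up.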

That’s causing issues at a growing number of companies, creating never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first of many dominoes to be knocked over by generative AI models, revolutionizing tech labor before coming for everyone else.

The fact that AI is not, in fact, improving coding productivity is a major bellwether for the prospects of an AI productivity revolution impacting the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei's made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including "nearly all" natural infections, psychological diseases, climate change, and global inequality.

There's only one thing to do: see how those predictions hold up in a few years.

50 comments
[–] poopkins@lemmy.world 59 points 1 day ago (16 children)

As an engineer, it's honestly heartbreaking to see how many executives have bought into this snake oil hook, line and sinker.

[–] rozodru@piefed.social 14 points 1 day ago (1 children)

as someone who now does consultation code review focused purely on AI... nah, let them continue drilling holes in their ship. I'm booked solid for the next several months, multiple clients on the go, and I'm making more just being a digital janitor than I was as a regular consultant dev. I charge a premium to simply point said sinking ship toward land.

Make no mistake though this is NOT something I want to keep doing in the next year or two and I honestly hope these places figure it out soon. Some have, some of my clients have realized that saving a few bucks by paying for an anthropic subscription, paying a junior dev to be a prompt monkey, while firing the rest of their dev team really wasn't worth it in the long run.

the issue now is they've shot themselves in the foot. The AI bit back. They need devs, and they can't find them because putting out any sort of ad for hiring results in hundreds upon hundreds of bullshit AI generated resumes from unqualified people while the REAL devs get lost in the shuffle.

[–] MangoCats@feddit.it 2 points 23 hours ago

while firing the rest of their dev team

That's the complete mistake right there. AI can help code, it can't replace the organizational knowledge your team has developed.

Some shops may think they don't have/need organizational knowledge, but they all do. That's one big reason why new hires take so long to start being productive.

[–] Blackmist@feddit.uk 14 points 1 day ago

Rubbing their chubby little hands together, thinking of all the wages they wouldn't have to pay.

[–] chaosCruiser@futurology.today 131 points 1 day ago (1 children)

When the CEO of a tech company says that in x months this and that will happen, you know it’s just musk talk.

[–] Tollana1234567@lemmy.today 14 points 1 day ago (1 children)

more like "6 months, because we need the VC funds still"

[–] vane@lemmy.world 44 points 1 day ago (1 children)

It is writing 90% of code, 90% of code that goes to trash.

[–] Dremor@lemmy.world 15 points 1 day ago (1 children)

Writing 90% of the code, and 90% of the bugs.

[–] Gutek8134@lemmy.world 11 points 1 day ago (5 children)

That would actually be a good score; it would mean it's about as good as humans, assuming the code works in the end


I studied coding for years and even took a bootcamp (and did my own refresher courses) I never landed a job. One thing that AI can do for me is help me in troubleshooting or some minor boilerplate code but not to do the job for me. I will be a hobbyist and hopefully aid in open source projects some day....any day now!

[–] PieMePlenty@lemmy.world 25 points 1 day ago* (last edited 1 day ago) (1 children)

It's to hype up stock value. I don't even take it seriously anymore. Many businesses like these are mostly smoke and mirrors: oversell and underdeliver. It's not even exclusive to tech, it's just easier to do in tech. Musk says FSD is one year away. The company I worked for "sold" things we didn't even make and promised revenue that wasn't even economically possible. It's all the same spiel.

[–] Doomsider@lemmy.world 5 points 1 day ago

Workers would be fired if they lied about their production or abilities. Strange that the leaders are allowed to without consequences.

[–] confuser@lemmy.zip 4 points 22 hours ago

AI writes 90% of my code... I don't code much.

[–] melsaskca@lemmy.ca 16 points 1 day ago (1 children)

Everyone throughout history who invented a widget the masses wanted automatically assumes, because of their newfound wealth, that they are somehow superior in societal knowledge and know what's best for us. Fucking capitalism. Fucking billionaires.

[–] inclementimmigrant@lemmy.world 13 points 1 day ago (4 children)

My company and specifically my team are looking at incorporating AI as a supplement to our coding.

We looked at the code produced and determined that it's of the quality of a new hire. However, we're going in with eyes wide open (for me, skeptical AF), and we're going to try to use it in a limited way to help relieve some of the burdens on our SW engineers, not replace them. I'm leading the effort to use it for writing unit tests, because none of us particularly likes writing unit tests, and they have a very nice, easy, established pattern that the AI can follow.
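As a hypothetical illustration of the kind of "established pattern" the comment above describes: the function and cases below are invented, but the table-driven shape is what makes unit-test boilerplate easy for an LLM to generate and easy for a human to review.

```python
import unittest

def slugify(title: str) -> str:
    # Hypothetical function under test: lowercase and join words with hyphens.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # The repetitive input -> expected table is exactly the boilerplate
    # an LLM can extend once the pattern is established.
    CASES = [
        ("Hello World", "hello-world"),
        ("  Extra   Spaces  ", "extra-spaces"),
        ("already-slugged", "already-slugged"),
    ]

    def test_cases(self):
        for raw, expected in self.CASES:
            with self.subTest(raw=raw):
                self.assertEqual(slugify(raw), expected)
```

Run with `python -m unittest`; adding a new case is a one-line diff, which keeps review cost low even when the AI writes the line.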

[–] UnderpantsWeevil@lemmy.world 8 points 23 hours ago (6 children)

We looked at the code produced and determined that it’s of the quality of a new hire.

As someone who did new hire training for about five years, this is not what I'd call promising.

[–] psycho_driver@lemmy.world 30 points 1 day ago (4 children)

The good news is that AI is at a stage where it's more than capable of doing the CEO of Anthropic's job.

[–] resipsaloquitur@lemmy.world 63 points 1 day ago (3 children)

Code has to work, though.

AI is good at writing plausible BS. Good for scams and call centers.

[–] Aceticon@lemmy.dbzer0.com 27 points 1 day ago

It's almost as if they shamelessly lie...

[–] Catoblepas@piefed.blahaj.zone 35 points 1 day ago (4 children)

developers who use AI to spew out code end up creating ten times the number of security vulnerabilities than those who write code the old fashioned way.

I’m going to become whatever the gay version of Amish is.

[–] merc@sh.itjust.works 35 points 1 day ago (3 children)

Does it count if an LLM is generating mountains of code that then gets thrown away? Maybe he can win the prediction on a technicality.

[–] leftzero@lemmy.dbzer0.com 30 points 1 day ago* (last edited 1 day ago) (2 children)

I'm fairly certain it is writing 90% of Windows updates, at least...

[–] ThePowerOfGeek@lemmy.world 44 points 1 day ago (3 children)

It's almost like he's full of shit and he's nothing but a snake oil salesman, eh.

They've been talking about replacing software developers with automated/AI systems for a quarter of a century. Probably longer than that, in fact.

We're definitely closer to that than ever. But there's still a huge step between some rando vibe coding a one page web app and developers augmenting their work with AI, and someone building a complex, business rule heavy, heavy load, scalable real world system. The chronic under-appreciation of engineering and design experience continues unabated.

Anthropic, Open AI, etc? They will continue to hype their own products with outrageous claims. Because that's what gets them more VC money. Grifters gonna grift.

[–] SaveTheTuaHawk@lemmy.ca 5 points 1 day ago

He's as prophetic as Elon Musk.

If he's wrong about that, then he's probably wrong about nearly everything else he says. They just pull these statements out of their ass and try to make them real. The eternal problem with making something real is that reality can't be changed. The garbage they have now isn't that good, and he should know that.

[–] kescusay@lemmy.world 11 points 1 day ago (6 children)

After working on a team that uses LLMs in agentic mode for almost a year, I'd say this is probably accurate.

Most of the work at this point for a big chunk of the team is trying to figure out prompts that will make it do what they want, without producing any user-facing results at all. The rest of us will use it to generate small bits of code, such as one-off scripts to accomplish a specific task - the only area where it's actually useful.

The shine wears off quickly after the fourth or fifth time it "finishes" a feature by mocking data, because so many publicly facing repos it trained on have mock data in them, so it thinks that's useful.
