this post was submitted on 12 Sep 2025
Not even close.

With so many wild predictions flying around about the future of AI, it's important to occasionally take a step back and check in on which predictions came true and which haven't come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study spent less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI's work, tweaking prompts, and waiting for the system to spit out code.

And it's not just that AI-generated code missed Amodei's benchmarks. In some cases, it's actively causing problems.

Cybersecurity researchers recently found that developers who use AI to spew out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

That's causing issues at a growing number of companies, leading to never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before it comes for everyone else.

That AI is not actually improving coding productivity is a major bellwether for the prospects of an AI productivity revolution impacting the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei's made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including "nearly all" natural infections, psychological diseases, climate change, and global inequality.

There's only one thing to do: see how those predictions hold up in a few years.

[–] setsubyou@lemmy.world 11 points 1 day ago (1 children)

Well it’s not improving my productivity, and it does mostly slow me down, but it’s kind of entertaining to watch sometimes. Just can’t waste time on trying to make it do anything complicated because that never goes well.

Tbh I’m mostly trying to use the AI tools my employer allows because it’s not actually necessary for me to believe that they’re helping. It’s good enough if the management thinks I’m more productive. They don’t understand what I’m doing anyway but if this gives them a warm fuzzy feeling because they think they’re getting more out of my salary, why not play along a little.

[–] theterrasque@infosec.pub 4 points 1 day ago (1 children)

Just can’t waste time on trying to make it do anything complicated because that never goes well.

Yeah, that's a waste of time. It can knock out simple code you could easily write yourself, though — the kind that's boring to write and takes time away from working on the real problems.

[–] rozodru@piefed.social 2 points 1 day ago (1 children)

for setting stuff up, putting down a basic empty framework, setting up dirs/files/whatever, it's great. in that regard yeah it'll save you time.

For doing the ACTUAL work? no. maybe to help write say a simple function or whatever, sure. beyond that? if it can't nail it the first or second time? just ditch it.

[–] theterrasque@infosec.pub 1 points 13 hours ago

I've found it useful for writing unit tests once you've written one or two, for specific functions, and for small scripts. For example, some time ago I needed a script that found a machine's public IP, then posted it to an MQTT topic along with a timestamp, with the config abstracted out into a file.

Now there's nothing difficult about this, but just looking up which libraries to use and their syntax takes some time, along with actually writing the code. Also, since it's so straightforward, it's pretty boring. ChatGPT wrote it in under two minutes, working perfectly on the first try.
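For reference, a script like the one described would only be a few lines. This is a hypothetical sketch, not the commenter's actual code: the config keys, topic name, and choice of libraries (stdlib `urllib` for the IP lookup, the third-party `paho-mqtt` package for publishing) are all assumptions for illustration.

```python
import json
import time
import urllib.request


def load_config(path="config.json"):
    """Read broker host/port/topic from a small JSON config file."""
    with open(path) as f:
        return json.load(f)


def get_public_ip():
    """Ask a public echo service (api.ipify.org) for this machine's public IP."""
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()


def build_payload(ip, ts=None):
    """JSON payload pairing the IP with a Unix timestamp."""
    return json.dumps({"ip": ip, "timestamp": ts if ts is not None else int(time.time())})


def main():
    # paho-mqtt is a third-party dependency: pip install paho-mqtt
    import paho.mqtt.publish as publish

    cfg = load_config()
    publish.single(
        cfg["topic"],
        build_payload(get_public_ip()),
        hostname=cfg["host"],
        port=cfg.get("port", 1883),
    )
    # e.g. invoke main() from a cron job or systemd timer
```

Exactly the kind of glue code that's tedious to look up but trivial to review once written.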

It's also been helpful with bash scripts, powershell scripts and ansible playbooks. Things I don't really remember the syntax on between use, and which are a bit arcane / exotic. It's just a nice helper to have for the boring and simple things that still need to be done.