this post was submitted on 20 Apr 2026
221 points (99.1% liked)

top 20 comments
[–] tburkhol@lemmy.world 25 points 2 hours ago (1 children)

I feel like the big mistake they continue to propagate is failing to distinguish among the uses of AI.

A lot of the hype centers on generative uses, where AI creates code, images, text, or whatever, and agentic uses, where it supposedly automates some process. Safe use there should involve human review and approval, and if the human spends as much time reviewing as they would have spent creating it in the first place, there's a productivity loss.

All the positive cases I've heard of use AI like a fancy search engine: look for specific issues in a large code base, or check internal consistency in a large document or document set. That mode lets the human shift from reading hundreds or thousands of pages to reading whatever snippets the AI returns. Even with a lot of false positives, that's still a big savings over a full review. And as long as the AI's false-negative rate is better than the human's, it's a net improvement in review.

And, of course, there's the possibility that AI-facilitated review lets companies review documents they would otherwise have ignored as intractable, which would also show up as reduced productivity.

[–] IphtashuFitz@lemmy.world 6 points 48 minutes ago (1 children)

I have a “prosumer” internet setup at home for various reasons. It’s UniFi gear, which is highly configurable, and configs are centrally managed. They provide a pretty robust web UI to manage it all, but the configuration all resides in plain text files that you can also hand edit if you want to do anything really advanced.

While troubleshooting an issue recently I came across a post on their support forum from somebody who had used Claude to analyze those config files and make recommendations. Since I have access to Claude through my employer I decided to give that a try. I was pleasantly surprised with the recommendations it made after it spent a few minutes analyzing my configuration.

[–] tburkhol@lemmy.world 4 points 21 minutes ago

To me, that's the 'fancy search engine' mode of AI where it works well and basically focuses the human effort. A needle-in-haystack problem. It might still be missing things, but they're things you've already missed yourself, so no loss.

It's different from asking Claude, for example, to create a new guest VLAN with limited internet access and access to only a specific service on the private network. For that, you have to either 1) trust Claude because you lack the expertise to review, 2) spend time learning the config system well enough to review, or 3) already know the system well enough to check it. Option 1 just sounds bad; option 2 means Claude isn't saving much time, though it may at least focus the human's study; and with option 3 the human could likely have done the job in about the time it takes to write the prompt and review the result.
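To make the review burden concrete, a guest-VLAN policy of the kind described might look roughly like this in nftables syntax. This is a hypothetical sketch: the subnet, service address, and port are invented, and UniFi's actual generated configuration will differ.

```
# Hypothetical sketch, not UniFi's real config format.
table inet guest_policy {
  chain forward {
    type filter hook forward priority filter; policy drop;

    # Guest VLAN (assumed 192.168.50.0/24) may reach the internet
    # but not the rest of the private range...
    ip saddr 192.168.50.0/24 ip daddr != 192.168.0.0/16 accept

    # ...except exactly one internal service (assumed 192.168.1.10:8096).
    ip saddr 192.168.50.0/24 ip daddr 192.168.1.10 tcp dport 8096 accept
  }
}
```

Verifying that those few rules actually enforce the intended isolation is exactly the review work described above.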

[–] deadbeef79000@lemmy.nz 19 points 3 hours ago

Thousands of CEOs just realized they're the prime candidate for replacement by an LLM.

[–] Einhornyordle@feddit.org 35 points 3 hours ago* (last edited 3 hours ago) (1 children)

No impact? Nothing? I mean, they should at least notice something, right?

A study published in February by the National Bureau of Economic Research found that among 6,000 CEOs, chief financial officers, and other executives from firms who responded to various business outlook surveys in the U.S., U.K., Germany, and Australia, the vast majority see little impact from AI on their operations. While about two-thirds of executives reported using AI, that usage amounted to only about 1.5 hours per week, and 25% of respondents reported not using AI in the workplace at all. Nearly 90% of firms said AI has had no impact on employment or productivity over the last three years, the research noted.

Well duh, that explains everything. Me getting paid for taking a dump 1.5h a week hasn't had any impact on my productivity score either. My guess is those 1.5h were mostly used to ask questions you'd otherwise just look up yourself, which also doesn't change much in terms of productivity.

[–] WhatAmLemmy@lemmy.world 15 points 2 hours ago

Companies are built on deterministic, predictable processes and workflows. A stochastic tool which randomly hallucinates correlations as fact, absent of critical thought, introduces a huge amount of risk/uncertainty; especially regarding data security.

It's not surprising most corporations aren't seeing a productivity boost, because the product, tooling, and ecosystem are simply not at a level of maturity where they can be trusted with any core or critical tasks. When you add in the potential for significant future price increases, and other unknown impacts outside your control, choosing to voluntarily make your business dependent on some third party's ever-changing product sounds completely insane.

[–] theOneTrueSpoon@feddit.uk 37 points 4 hours ago (1 children)
[–] Honytawk@discuss.tchncs.de 44 points 4 hours ago* (last edited 4 hours ago) (1 children)

I am shocked that thousands of CEOs dare to admit they were wrong.

[–] theOneTrueSpoon@feddit.uk 8 points 3 hours ago
[–] TallonMetroid@lemmy.world 17 points 3 hours ago

However, firms’ expectations of AI’s workplace and economic impact remained substantial: Executives also forecast AI will increase productivity by 1.4% and increase output by 0.8% over the next three years. While firms expected a 0.7% cut to employment over this time period, individual employees surveyed saw a 0.5% increase in employment.

But they'll continue to shove it down the wage-slaves' throats.

[–] FriendOfDeSoto@startrek.website 7 points 3 hours ago (1 children)

Before we gloat too much - and let's be honest, we all wanna - CEOs tend to be of a certain vintage. I remember how I could program the VCR and my parents decidedly could not. Old folks' opinions may be less relevant here.

Bringing all these tools in is basically giving magic beans to cave people. How would they know how to use them effectively? All the while trying to figure out if they are indeed magic. This, sadly, could just be the anomaly before the numbers go up. This isn't proof positive that it's all horse shit just yet. It's just confirmation that the peddlers are overflowing with it.

[–] fodor@lemmy.zip 10 points 3 hours ago

I think your conclusion is too generous. Obviously many things can happen in the future as technology evolves, but we need to consider what people have promised us and what they delivered. That's the definition of integrity. Many of these CEOs and of course this magazine lack integrity.

And I think you can be even more blunt. You can call out the companies that are riding the bubble as long as they can in hopes that they won't be replaceable when the bubble bursts. If they can embed themselves with national governments or as pieces of other mega corporations, then they will survive even if they shouldn't.

And some companies are run by people who have gotten rich already yet know that their companies will never be able to deliver on the promises that they've made. Because the point was for the individuals to get rich, not to sell something economically viable.

[–] Buffalox@lemmy.world 2 points 2 hours ago

So the bad job numbers have nothing to do with AI anyways?
Hands up if you are surprised by this...

Why am I not seeing any hands???

[–] fodor@lemmy.zip 6 points 3 hours ago

I love how Fortune is presenting this as if maybe the CEOs had been deceptive: they're admitting it, which means they pretended otherwise for a while and now finally have to tell the truth... except Fortune was selling the same line, right there together with the CEOs. Has Fortune apologized for its own part in propping up this bubble, when everyone knew it was largely nonsense?

[–] webp@mander.xyz 1 points 4 hours ago
[–] jordanlund@lemmy.world 0 points 3 hours ago

Among CEOs? Probably true. Workers are being outsourced in droves though.

[–] chaosCruiser@futurology.today -2 points 4 hours ago (1 children)

Looks like AI is finally diving into the trough of disillusionment.

[–] Eyekaytee@aussie.zone 1 points 3 hours ago (1 children)

I don't think it's at the 'employee-destroying, we're going to be playing in our gardens while AI robots do all our work for us' level that AI CEOs have been proclaiming, but it is very useful

I would put it top 4, after Google (1999), the internet, and smartphones, in terms of usefulness; I'm using it every day

[–] NekoKoneko@lemmy.world 4 points 2 hours ago (1 children)

I'm using it every day

To do what?

[–] Eyekaytee@aussie.zone 1 points 1 hour ago* (last edited 45 minutes ago)

An example: yesterday I used Claude a ton for updating an old static site archive: updating the Caddyfile, setting caching, removing old bits of code, and updating the CSS on hundreds of HTML pages. It made it so easy
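For the simpler cases, a bulk edit like that CSS change boils down to a find/sed pass over the pages. A minimal sketch, with hypothetical file and stylesheet names:

```shell
# Set up two toy pages referencing a hypothetical old stylesheet.
mkdir -p site
printf '<link rel="stylesheet" href="old-style.css">\n' > site/page1.html
printf '<link rel="stylesheet" href="old-style.css">\n' > site/page2.html

# Swap the stylesheet reference across every HTML page in one pass.
find site -name '*.html' -exec sed -i 's/old-style\.css/new-style\.css/g' {} +
```

The same pattern scales to hundreds of pages; the hard part (which an LLM helps with) is working out the right substitution patterns for each change.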

Any time I want to know if a Linux command exists to manipulate data, particularly for media conversion and anything involving regex/sed

Analysing log files, explaining concepts... I've used it to build a massive Python script for automating work tasks, and more scripts to give me better insight into our monitoring
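The log-analysis side is often the kind of one-liner an LLM will suggest. A sketch, where the log format and file name are assumptions rather than the poster's actual setup:

```shell
# Create a few hypothetical log lines; real formats vary.
cat > app.log <<'EOF'
2026-04-20 10:00:01 ERROR db timeout connecting to replica
2026-04-20 10:00:05 INFO request served in 12ms
2026-04-20 10:01:12 ERROR db timeout connecting to replica
2026-04-20 10:02:30 WARN disk usage at 85%
EOF

# Count entries per severity level (field 3 in this assumed format).
awk '{print $3}' app.log | sort | uniq -c | sort -rn
```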

I used Mistral to generate a wallpaper image of a forest, then used https://upscayl.org/ to make it massive so it looks amazing on my 34" ultrawide

I've also used it to make my own selfhosted image upload site

Loads and loads and loads of discussions on health, vitamins, strength training routines, etc.

The other day I had a carbonated drink and it upset my gut, which is typical (I have IBS). Claude found an alternative local soft drink maker with low-sugar drinks that use monk fruit extract instead of the usual artificial flavours, so I went on a 1.5 hour drive west and got myself some:

https://aussie.zone/post/31756118

Those images are hosted on the server, and the code was built with Claude/Mistral (with Supabase as the backend)

And yeah, it's WAY better for me; I doubt I would have ever found it, since I've been looking for an IBS-friendly soft drink for years

I use LM Studio with different models (Qwen/Gemma/GLM/Mistral) for basic language learning and translations, and tier-1 JavaScript learning

And so much more. Claude especially has kicked it up a gear in the last 6 months, while Mistral sadly appears to be falling behind; I'm hoping they catch up soon

edit: I like how 2 people have downvoted me just for explaining what I use ai for, never change lemmy :)