this post was submitted on 21 Jan 2026
655 points (98.5% liked)

Technology

Workers should learn AI skills and companies should use it because it's a "cognitive amplifier," claims Satya Nadella.

In other words: please help us, use our AI.

[–] kameecoding@lemmy.world 25 points 3 hours ago (4 children)

I will try to have a balanced take here:

The positives:

  • there are some uses for this "AI"
  • like an IDE, it can help speed up development, especially for menial but important tasks such as unit test coverage.
  • it can be useful for rewording things into the puke-inducing corpo slang you sometimes have to use.
  • it is useful as a sort of better Google: for things that are documented, but where reading the documentation makes your head hurt, you can ask it to dumb it down, get the core concept, and go from there

The negatives:

  • the positives don't justify the environmental externalities of all these AI companies
  • the positives don't justify the PC hardware/silicon price hikes
  • shoehorning this into everything is capital R retarded.
  • AI is a fucking bubble keeping the US economy inflated instead of letting it crash like it should have a while ago
  • other than a paid product like Copilot, there is simply very little commercially viable use-case for all this public cloud infrastructure, other than targeting you with more ads that you can't block because they're embedded in the text output.

Overall, I wish the AI bubble would burst already.

[–] ViatorOmnium@piefed.social 24 points 3 hours ago (2 children)

menial but important tasks such as unit test coverage

This is one of the cases where AI is worse. LLMs will generate the tests based on how the code works, not how it is supposed to work. Granted, lots of mediocre engineers also use the "freeze the results" method for meaningless test coverage, but at least human beings have the ability to reflect on what the hell they're doing at some point.
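
A quick hypothetical sketch of that "freeze the results" failure mode in Python; the function and tests are invented for illustration, not taken from any real codebase:

```python
# Hypothetical example: a buggy discount function and the two kinds of test.

def apply_discount(price: float, percent: float) -> float:
    # Bug: divides by 10 instead of 100, so a "10%" discount wipes out the whole price.
    return price - price * percent / 10


def test_frozen_behaviour():
    # Written by observing what the code currently returns: passes, bug locked in.
    assert apply_discount(100.0, 10.0) == 0.0


def test_specified_behaviour():
    # Written from what the code is supposed to do: fails and exposes the bug.
    assert apply_discount(100.0, 10.0) == 90.0
```

Run with pytest, only the second test fails, which is exactly the feedback the frozen version never produces.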

[–] Buddahriffic@lemmy.world 1 point 2 minutes ago

You could have it write unit tests as black-box tests, where you only give it access to the function signature. Even then, though, it still needs to understand what the results should be, which will vary from case to case.
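
A rough sketch of how that black-box setup might look, with a made-up function: the model would only be shown the signature and docstring, and the expected values still have to come from the spec.

```python
# Hypothetical black-box setup: the test is written against the signature and
# docstring only, never against the body. (An implementation is included here
# just so the example runs.)

def normalize_tags(tags: list[str]) -> list[str]:
    """Return the tags lowercased, deduplicated, in first-seen order."""
    seen: dict[str, None] = {}
    for tag in tags:
        seen.setdefault(tag.lower(), None)
    return list(seen)


def test_normalize_tags_black_box():
    # Expected values come from the docstring (the spec), not from reading the code.
    assert normalize_tags(["Rust", "rust", "Go"]) == ["rust", "go"]
    assert normalize_tags([]) == []
```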

[–] JoeBigelow@lemmy.ca 0 points 2 hours ago

I think machine learning has vast potential in this area, specifically for things like running iterative tests in a laboratory or parsing very large data sets. But a fuckin LLM is not the solution. It makes a nice translation layer, so I don't need to speak and understand bleep bloop and can tell it what I want in plain language. But beyond that, LLMs seem useless to me outside of fancy search uses. It should be the initial processing layer that figures out what type of actual AI (ML) to use to accomplish the task. I just want an automator I can direct in plain language; why is that not what's happening? I know I don't know enough to have an opinion, but I do anyway!
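
For what it's worth, that "translation layer" idea could look something like the toy sketch below: the LLM's only job is to turn a plain-language request into a structured job for a conventional ML tool. The keyword matching stands in for an actual LLM call, and the tool names are made up:

```python
# Toy sketch: plain-language request in, structured job for a non-LLM tool out.
# The keyword matching is a stand-in for a real LLM call; the tool names are
# hypothetical.

from dataclasses import dataclass, field


@dataclass
class Job:
    tool: str                          # which conventional ML component does the real work
    args: dict[str, str] = field(default_factory=dict)


def route_request(request: str) -> Job:
    text = request.lower()
    if "cluster" in text or "group" in text:
        return Job("clustering_runner", {"input": "dataset.csv"})
    if "forecast" in text or "predict" in text:
        return Job("regression_runner", {"input": "dataset.csv"})
    # Fall back to plain search when no ML task is recognised.
    return Job("search_index", {"query": request})


if __name__ == "__main__":
    print(route_request("group these customers by purchasing behaviour"))
    # -> Job(tool='clustering_runner', args={'input': 'dataset.csv'})
```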

[–] arendjr@programming.dev 4 points 2 hours ago

So I’m the literal author of the Philosophy of Balance, and I don’t see any reason why LLMs deserve a balanced take.

This is how the Philosophy of Balance works: We should strive…

  • for balance within ourselves
  • for balance with those around us
  • and ultimately, for balance with Life and the Universe at large

But here’s the thing: LLMs and the technocratic elite funding them are a net negative for humanity and the world at large. Striving for a balanced approach towards AI therefore puts you on the wrong side of the battle for humanity, and thus on the wrong side of human history.

Pick a side.

[–] rumba@lemmy.zip 5 points 2 hours ago

They f'd up with the electricity rates and hardware price hikes. They'd been getting away with it because they weren't inconveniencing enough laymen.

[–] Schal330@lemmy.world 0 points 2 hours ago

it is useful as a sort of better Google: for things that are documented, but where reading the documentation makes your head hurt, you can ask it to dumb it down, get the core concept, and go from there

I agree with this point so much. I'm probably a real thicko, and being able to use it to explain concepts in a different way or provide analogies has been so helpful for my learning.

I hate the impact from the use of AI, and I hope we'll see greater efficiencies in the near future so there's less resource consumption.