this post was submitted on 02 Feb 2026
356 points (96.8% liked)

Technology

[–] percent@infosec.pub 6 points 2 days ago (18 children)

I wouldn't be surprised if that's only a temporary problem, if it becomes one at all. People are quickly discovering ways to use LLMs more effectively, and open-source models are starting to become competitive with commercial ones. If we keep finding ways to get more out of smaller, open-source models, then maybe we'll be able to run them on consumer- or prosumer-grade hardware.
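For a rough sense of what "consumer-grade hardware" would take, here's a back-of-the-envelope VRAM estimate. The parameter counts are illustrative, and the 20% overhead factor (KV cache, activations) is a loose assumption, not a measured figure:

```python
# Back-of-the-envelope check of whether a quantized open model fits in
# consumer VRAM. The ~20% overhead factor is a rough assumption.

def model_vram_gb(params_billion: float, bits_per_weight: int,
                  overhead: float = 1.2) -> float:
    """Approximate VRAM (GB) needed to run a model at a given quantization."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    for bits in (16, 4):
        print(f"{name} @ {bits}-bit: ~{model_vram_gb(params, bits):.1f} GB")
```

By this estimate a 7B model at 4-bit quantization needs roughly 4 GB, which is why it runs on an ordinary gaming GPU, while the same model at 16-bit does not.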

GPUs and TPUs have also been improving their energy efficiency. There seems to be a big commercial focus on that too, as energy availability is quickly becoming a bottleneck.

[–] WanderingThoughts@europe.pub 19 points 2 days ago (9 children)

So far, there's a serious cognitive step needed that LLMs just can't take to be productive. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their token window. Debugging something vague doesn't work, and fact-checking isn't something they do well.
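The token-window point is easy to sanity-check with arithmetic. This sketch uses the common "~4 characters per token" heuristic, which is approximate and varies by tokenizer:

```python
# Rough illustration of why large projects overflow a context window.
# The chars/4 ratio is a common heuristic, not an exact tokenizer count.

import os

def estimate_tokens(path: str, exts=(".py", ".md")) -> int:
    """Estimate the token count of a source tree (chars / 4 heuristic)."""
    chars = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            if name.endswith(exts):
                with open(os.path.join(root, name), errors="ignore") as fh:
                    chars += len(fh.read())
    return chars // 4

# Even a 200k-token window is only about 200_000 * 4 chars ~= 800 KB of
# source text -- many real repositories are far larger than that.
```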

[–] VibeSurgeon@piefed.social 4 points 2 days ago (4 children)

> So far, there's a serious cognitive step needed that LLMs just can't take to be productive. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their token window.

There's a remarkably effective solution for this that helps humans and models alike: write documentation.

It's actually kind of funny how the LLM wave has sparked a renaissance of high-quality documentation. Who would have thought?

Funnily enough, AI itself is a great tool for creating that high-quality documentation fairly efficiently, though obviously not autonomously.

Even complex systems can be documented to a level where it's easy and much less laborious for the subject-matter experts and architects to comb through the drafts for the final version.
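A sketch of what the non-autonomous workflow might look like: mechanically find the undocumented functions, have a model draft docstrings for just those, and let a human review. Only the detection step is shown here; the LLM call and review loop are left out, since those depend on whatever API you use:

```python
# Sketch of the mechanical half of an AI-assisted documentation pass:
# detect which functions and classes lack docstrings, so a model can be
# asked to draft exactly those, with a human reviewing the result.

import ast

def undocumented_functions(source: str) -> list[str]:
    """Return names of functions/classes in `source` lacking a docstring."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                             ast.ClassDef)):
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

code = """
def documented():
    '''Already has a docstring.'''

def bare(x):
    return x * 2
"""

print(undocumented_functions(code))  # ['bare']
```

Keeping the detection deterministic and the generation reviewable is what makes this workable: the model only fills gaps, and the expert only combs through drafts instead of writing everything from scratch.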
