this post was submitted on 11 Mar 2026
97 points (96.2% liked)
Technology
I followed AI developments early on, but it felt like really effective use cases were always just out of reach.
The last time I used AI was before "agentic" AI was a thing (it was just around the corner).
Can anyone clue me in: is AI still making forward progress? I feel like a massive change or breakthrough would be HUGE news, but I also imagine slow, incremental progress could eventually add up to one.
I understand that it is still way too prone to errors and hallucinations to be trusted with serious tasks, but have there been any noteworthy improvements?
LLM-based coding agents have become useful to the point that people are building large software projects without humans writing or reviewing the code directly. That naive approach will end in disaster if used in a production environment, but practices to improve reliability are evolving.
Popular opinion seems to be that Claude Opus 4.5 was the tipping point for this.