this post was submitted on 13 Feb 2026
70 points (100.0% liked)

Technology

81118 readers
4139 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 3 comments
sorted by: hot top controversial new old
[–] Deestan@lemmy.world 3 points 46 minutes ago

What is the phrase? Kicking in open doors?

[–] chisel@piefed.social 11 points 1 hour ago* (last edited 1 hour ago) (1 children)

Bad, sure, but also a well-known fundamental issue with basically all agentic LLMs, to the point where sounding the alarm as much as this seems silly. If you give an LLM any access to your machine, it might be tricked into taking malicious actions. The same can be said to real humans. Though they are better than LLMs right now, humans are the #1 security vulnerability in any system.

LLMs can get around this by requiring explicit permission before being able to take certain actions, but the more it asks for permission, the more the user has to just sit there and approve stuff, defeating the purpose of an autonomous agent in the first place.

To be clear, it's not good, and anyone running an agent with a high level of access with no oversight in a non-sandboxed environment today is a fool. But this article is written like they found an 11/10 severity CVE in the linux kernal when really this is a well-known fundamental thing that can happen if you're an idiot and misusing LLMs to a wild degree. LLMs have plenty of legitimate issues to trash on. This isn't one of them. Well, maybe a little.

[–] XLE@piefed.social 4 points 1 hour ago

According to the article, the lack of sandboxing is intentional on Anthropic's part. Anthropic also fails to realistically communicate how to use their product.

This is Anthropic's fault.