What is the phrase? Kicking in open doors?
Bad, sure, but this is also a well-known, fundamental issue with basically all agentic LLMs, to the point where sounding the alarm this loudly seems silly. If you give an LLM any access to your machine, it might be tricked into taking malicious actions. The same can be said of real humans; though they're harder to fool than LLMs right now, humans are the #1 security vulnerability in any system.
Agent frameworks can mitigate this by requiring explicit permission before certain actions, but the more the agent asks for permission, the more the user has to sit there approving things, which defeats the purpose of an autonomous agent in the first place.
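The usual compromise is to gate only the risky tool calls behind a confirmation prompt. Here's a rough sketch of what that looks like; the tool names and the `run_tool`/`dispatch` helpers are made up for illustration, not any vendor's actual API:

```python
# Hypothetical human-in-the-loop gate for an agent's tool calls.
RISKY_TOOLS = {"shell", "write_file", "send_http"}  # actions worth a prompt

def dispatch(name: str, args: dict) -> str:
    # Placeholder: a real agent would invoke the actual tool implementation here.
    return f"ran {name} with {args}"

def run_tool(name: str, args: dict) -> str:
    """Run a tool call only if it's low-risk or the user explicitly approves it."""
    if name in RISKY_TOOLS:
        answer = input(f"Agent wants to run {name} with {args}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "denied by user"
    return dispatch(name, args)

# Example: this one would stop and ask before touching the filesystem.
print(run_tool("write_file", {"path": "notes.txt", "content": "hi"}))
```

Even that adds friction fast: every entry you add to the risky list is another prompt the user has to click through, which is exactly the autonomy trade-off.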
To be clear, it's not good, and anyone running an agent with a high level of access, no oversight, and no sandbox today is a fool. But this article is written like they found an 11/10-severity CVE in the Linux kernel, when really this is a well-known, fundamental thing that can happen if you're an idiot and misuse LLMs to a wild degree. LLMs have plenty of legitimate issues to trash them on. This isn't one of them. Well, maybe a little.
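And basic sandboxing isn't exotic. One rough way to do it, assuming Docker is available (the image, mount path, and timeout here are placeholder choices, not a specific recommendation), is to run each shell command the agent wants in a throwaway container with no network and a read-only filesystem:

```python
import subprocess

def run_sandboxed(command: str) -> str:
    """Run one agent-requested shell command in a disposable, network-less container."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",            # no outbound network
            "--read-only",                  # read-only root filesystem
            "-v", "/tmp/agent-work:/work",  # only this scratch dir is writable
            "-w", "/work",
            "alpine:3", "sh", "-c", command,
        ],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout + result.stderr

print(run_sandboxed("echo hello from the sandbox"))
```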
According to the article, the lack of sandboxing is intentional on Anthropic's part. Anthropic also fails to communicate realistically how its product should be used.
This is Anthropic's fault.