Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing
(www.businessinsider.com)
Ignore the "containment" framing: they made a hacking bot, and it seems to actually be good at finding and exploiting vulnerabilities:
Dismiss this as marketing drivel all you want, but hacking is just the sort of needle-in-a-haystack problem that AI is very good at. It requires broad knowledge and a lot of cycles of trying and failing, and success is easily verifiable: can you execute arbitrary scripts or not? Even if this release is BS, good hacking agents are bound to come eventually, and we should be discussing the implications of that instead of burying our heads in the sand, pretending AI is useless and this is all hype.
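To make the "easily verifiable" point concrete, here's a minimal sketch of what a verification harness could look like: run a candidate exploit in a subprocess and check for a hard success signal. Everything here (`verify_exploit`, the `MARKER_PATH` convention) is hypothetical, just to illustrate that the success condition is a binary, machine-checkable fact rather than a judgment call:

```python
import os
import subprocess
import sys
import tempfile

def verify_exploit(exploit_cmd: list[str], timeout: int = 30) -> bool:
    """Run a candidate exploit and check a concrete success signal.

    "Success" here means the exploit managed to create a marker file --
    a stand-in for "can you execute arbitrary scripts or not".
    """
    marker = os.path.join(tempfile.mkdtemp(), "pwned")
    env = {**os.environ, "MARKER_PATH": marker}
    try:
        subprocess.run(exploit_cmd, env=env, timeout=timeout,
                       capture_output=True, check=False)
    except subprocess.TimeoutExpired:
        return False  # hung exploits count as failures
    return os.path.exists(marker)
```

This kind of objective pass/fail check is exactly what lets an agent grind through thousands of failed attempts unsupervised: no human needs to judge whether an attempt "looks promising", only whether the marker exists.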
We need AI or else we'll have nothing to protect us from... AI.
It's an arms race like any other; cybersecurity has always been one. You can't stop developing security patches, because adversaries will keep developing new exploits.
If AI enables your adversaries to develop exploits faster than human defenders can keep up with, then yeah, AI will have to be part of the solution. That doesn't mean vibe-coding security patches, but it could mean AI-driven pen-testing.
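At its core, automated pen-testing is the same loop as fuzzing: generate lots of candidate inputs, throw them at the target, and keep whatever breaks it. A toy sketch (the `toy_parser` target and its bug are invented for illustration; real tools like AFL or libFuzzer are far more sophisticated):

```python
import random
import string

def toy_parser(data: str) -> None:
    """Deliberately buggy target: crashes on an unclosed brace."""
    if data.startswith("{") and "}" not in data:
        raise ValueError("unbalanced brace")

def fuzz(target, trials: int = 5000, seed: int = 0) -> list[str]:
    """Dumb random fuzzing: try many inputs, collect the ones that crash."""
    rng = random.Random(seed)
    alphabet = "{}" + string.ascii_letters
    crashes = []
    for _ in range(trials):
        data = "".join(rng.choice(alphabet)
                       for _ in range(rng.randint(1, 8)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes
```

The pitch for AI-driven versions is replacing the blind random generator with a model that reasons about the target, so it needs far fewer than 5000 tries to find the unbalanced-brace bug.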
Just like quantum computing: you can call it useless and impractical all you want, but some day someone is going to use it to break conventional encryption. So it would behoove you to develop quantum capabilities now, so that you have quantum-safe encryption in place before quantum-based exploits eventually arise, as they inevitably will.