Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing
(www.businessinsider.com)
I would love to see the exploit. Vulnerabilities are discovered every day that amount to very little in real-world use.
Yes, we recently got a security "finding" from a security researcher.
His "vulnerability" first required someone to remove or comment out the calls that sanitize data; he then claimed we had a vulnerability due to lack of sanitization...
Throughout my career, most security findings have been like this: useless, or even a bit deceitful. Some are really important, but most are garbage.
It may not be completely crazy, depending on context. With something like a web app, if data is only being sanitized in client-side JavaScript, someone malicious could absolutely comment that out (or otherwise bypass it).
With that said, many consultant types are either pretty clueless, or seem to feel they need to come up with something, no matter how ridiculous, to justify the large sums of money they charged.
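The bypass point above is worth making concrete. Here's a minimal sketch (handler and payload are hypothetical, not from the thread) of why the server must sanitize regardless of what the browser did: an attacker controls their own client, so any client-side check can simply be removed.

```python
# Hypothetical server-side handler: it escapes input itself instead of
# trusting that client-side JavaScript already did so, because an attacker
# can comment out or bypass any check that runs in their own browser.
import html

def handle_comment(raw_comment: str) -> str:
    # Treat every value crossing the trust boundary as hostile.
    return html.escape(raw_comment)

# A payload that would sail through if only the client sanitized:
print(handle_comment('<script>alert(1)</script>'))
# → &lt;script&gt;alert(1)&lt;/script&gt;
```

The design point is that the trust boundary sits at the server's edge, not in the browser; client-side checks are a usability feature, not a security control.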
In this case, there was file a, the backend file responsible for intake and sanitization. Depending on what comes next, it might hand off to file b or file c. He modified file a.
His rationale was that every single backend file should do its own sanitization, because at some future point someone might start a different project, take file b, and pair it with some other intake code that doesn't sanitize.
I know all about the client side being useless for meaningful security enforcement.
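The layout being described, a single intake file that sanitizes and downstream files that assume clean input, can be sketched like this (file and function names are hypothetical stand-ins for the "file a" / "file b" in the comment):

```python
# Hypothetical sketch of the architecture described above:
# intake() is the one trust boundary ("file a") where user input is
# sanitized; process_b() ("file b") assumes its input is already clean.
import html

def intake(raw: str) -> str:
    # "File a": the single place untrusted input is sanitized.
    return html.escape(raw.strip())

def process_b(clean: str) -> str:
    # "File b": relies on intake() having run; does not re-sanitize.
    return f"<p>{clean}</p>"

print(process_b(intake('  <b>hi</b>  ')))
# → <p>&lt;b&gt;hi&lt;/b&gt;</p>
```

The researcher's objection was essentially that `process_b` is unsafe if someone ever wires it to a different intake path; the commenter's counter is that sanitizing once at the boundary is a deliberate design, not a vulnerability in the code as shipped.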