Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing
(www.businessinsider.com)
That's so idiotic. Either that guy was a total amateur who couldn't put together that "no shit, if you comment out the lines that do thing, it won't do thing" or he was completely malevolent and disingenuous and just trying to justify his position by coming up with some crap that the big bosses are probably too stupid to recognize the idiocy of.
Either way, not someone I would want to be doing business with...
He had the perspective that hopping between source code files constitutes a security boundary. If you had intake.c and user_data.c that got linked together, well, user_data.c needed its own sanitization... Just in case...
I suspect he used a tool that checked files and flagged the risky pattern, the tool didn't understand the relationship between them, and he was so invested that he tortured it a bit to have any finding at all. I think he was hired by a client, and in my experience a security consultant always has a finding, no matter how clean the system is in practice.
Another finding, from a different security consultant, was that an open source dependency hadn't had any commits in a year. No vulnerabilities, but since no one had changed anything, he was concerned that if a vulnerability were ever found, the lack of activity meant no one would fix it.
It's wild how very good security work tends to share the stage with very shoddy work, both receiving equal deference from the broader tech industry.