Immediately after the big announcements about Mythos, other teams published follow-ups showing they could find most of the same vulnerabilities with other existing models. I think the main takeaway is that it's just a matter of actually looking. Anthropic's advantage may have been the framework that let them do this at industrial scale, rather than the cleverness of the particular model they used.
This sort of security scan is still new and worth paying attention to, but it isn't unique to Anthropic, and it can't be kept "contained." Shades of how GPT-2 was considered "too dangerous to release" back when it first appeared: comical in hindsight, and impossible to hold back anyway.