Anthropic probes reported unauthorized access to Mythos cyber model

Third-party vendor environment becomes the weak link; limited release still leaks at the edges

Anthropic is investigating a report that an unauthorized group gained access to “Claude Mythos Preview”, a cybersecurity tool the company released only to selected partners under its Project Glasswing program. TechCrunch, citing Bloomberg, says the access came through a third-party vendor environment and may have begun the same day the tool was announced.

Mythos is marketed as a defensive product: a model meant to scan code and open-source dependencies for vulnerabilities. But Anthropic has also warned that the same capability could be repurposed to strengthen attacks—an uncomfortable fit for a tool distributed outside the company’s own perimeter. According to Bloomberg’s account, the group did not break into Anthropic’s internal systems; it instead used a contractor’s access and guessed the model’s location based on how Anthropic has hosted other models. The group reportedly demonstrated ongoing use of Mythos via screenshots and a live demo.

The episode underlines a recurring problem in enterprise AI: “limited release” often means “available wherever a vendor can log in”. Once a model is placed in partner environments, the security boundary shifts from a single company’s controls to a chain of contractors, shared credentials, and internal access rules that were built for ordinary software—not for tools that can accelerate vulnerability discovery. Anthropic’s own statement to TechCrunch stresses that it has found no evidence its systems were impacted, but the report focuses on something else: the tool itself being reachable.

It also highlights a growing secondary market for early access. Bloomberg describes the group as operating via a private online forum and a Discord channel focused on unreleased AI models, framing their motive as experimentation rather than sabotage. That distinction matters less than the precedent: if access can be obtained by “educated guesses” and vendor-side footholds, then the main constraint becomes who knows where to look. In practice, the same distribution strategy that reassures customers—keeping Mythos away from the general public—creates a small set of high-value targets in partner networks.

Anthropic says it is still investigating what happened. The report’s most concrete claim is also the simplest: Mythos was designed to be selective, and yet someone outside that selection appears to have used it anyway.