Anthropic limits Mythos AI model to cybersecurity partners
Project Glasswing enlists Amazon, Apple, Microsoft and CrowdStrike to scan code for vulnerabilities; the defensive access model follows Anthropic’s own recent source code exposure
Anthropic has begun a limited preview of a new “frontier” AI model it calls Mythos, offering access to more than 40 partner organisations under a cybersecurity programme branded Project Glasswing. According to TechCrunch, the partners include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks, and the model will be used to scan software for vulnerabilities.
Anthropic says Mythos was not trained specifically for security, but has strong “agentic” coding and reasoning capabilities that make it useful for finding bugs across both first-party code and open-source dependencies. The company claims that, over the past few weeks, Mythos identified “thousands of zero-day vulnerabilities,” many of them critical, while also acknowledging that many of the issues were one to two decades old. That detail matters: a tool that can surface long-neglected flaws at scale is as much a commentary on maintenance incentives as it is a breakthrough in AI. If the backlog contains decades-old holes, the limiting factor has not been a lack of theoretical knowledge but a shortage of paid attention.
Project Glasswing is also a controlled distribution strategy. Anthropic says the preview will not be made generally available, a stance that reflects a widening gap between what leading labs can build and what they are willing to ship. The company itself has previously warned—via a leaked internal draft blog post, later reported elsewhere—that a more capable model could be weaponised to find and exploit vulnerabilities rather than fix them. The same features that make Mythos valuable to defenders also reduce the cost of offensive discovery for anyone who can run it.
The partner list points to the likely second-order effect: if large platforms and security vendors can run advanced models internally, they may find and remediate flaws faster than the long tail of smaller firms that depend on the same open-source components. Anthropic says partners will share learnings with the broader industry, but the immediate advantage accrues to those already sitting on the biggest codebases, the best telemetry, and the strongest incident-response teams.
The initiative lands after a run of self-inflicted security embarrassments for Anthropic. TechCrunch notes that a draft blog post about the model, then called “Capybara,” was previously left in an unsecured document cache, and that the company recently exposed nearly 2,000 source code files and more than half a million lines of code via a packaging mistake involving its Claude Code software.
Mythos is being marketed as a defensive tool, but its first public footprint is a restricted preview run with the same industry giants whose software supply chains it is meant to audit.