Vercel breach traces to Context AI OAuth integration
Employee-linked app connection exposes customer credentials and code; supply-chain security becomes identity hygiene
Sources: Zack Whittaker, techcrunch.com; "Anthropic's Mythos AI model sparks fears of turbocharged hacking," arstechnica.com
A breach at web-hosting firm Vercel began with an employee connecting a Context AI app to a corporate Google account, giving attackers a foothold that Vercel says led to customer data theft.
According to TechCrunch, the attackers abused the OAuth connection the Context AI app had created to take over the employee's Google account and, from there, reach parts of Vercel's internal systems, including credentials that were stored unencrypted. Vercel said its widely used open-source projects Next.js and Turbopack were not affected, and that it has contacted customers whose app data and keys were compromised. CEO Guillermo Rauch urged customers to rotate all keys and credentials, including those marked "non-sensitive".
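Rotation of the kind Rauch describes is mechanical and worth scripting rather than doing by hand. A minimal sketch, assuming a hypothetical list of key names (the helper and names are illustrative, not Vercel's API; pushing the new values to a real secret store is platform-specific):

```python
import secrets

def rotate_keys(key_names):
    """Generate a fresh random value for every named key, including
    ones previously treated as "non-sensitive" (hypothetical helper;
    writing the values back to a secret store is left out)."""
    return {name: secrets.token_urlsafe(32) for name in key_names}

# Illustrative key names only.
new_values = rotate_keys(["API_KEY", "WEBHOOK_SECRET", "ANALYTICS_ID"])
for name, value in new_values.items():
    # Print lengths, never the secrets themselves.
    print(f"{name}: replaced ({len(value)} chars)")
```

The point of rotating everything, not just keys labeled sensitive, is that an attacker with internal access cannot be assumed to have respected those labels.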
The episode is another reminder that “security incident” often means “identity incident”. OAuth is designed to let apps act on a user’s behalf; once a token is stolen, the attacker does not need to break passwords or bypass multifactor prompts—they inherit whatever permissions the user already has. In this case, one employee’s decision to connect a third-party tool to a corporate account appears to have created a bridge into production-adjacent systems that hold customer secrets.
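Why a stolen token is so effective can be shown in a few lines: OAuth resource servers authenticate the bearer token, not the caller, so a request built by an attacker is byte-for-byte identical to one from the authorized app. A minimal sketch (the token string and endpoint are made up for illustration):

```python
def build_request(token: str) -> dict:
    """Assemble the pieces of an OAuth-authenticated HTTP request.
    The server sees only the Authorization header, not who holds it."""
    return {
        "url": "https://www.googleapis.com/drive/v3/files",  # example endpoint
        "headers": {"Authorization": f"Bearer {token}"},
    }

legit = build_request("ya29.example-token")   # sent by the authorized app
stolen = build_request("ya29.example-token")  # same token, exfiltrated

# The two requests are indistinguishable: no password check,
# no MFA prompt stands between the attacker and the user's data.
assert legit == stolen
```

This is why token theft bypasses MFA by construction: the multifactor check happened once, at grant time, and the token is the durable proof of it.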
Vercel’s warning that the compromise may affect “hundreds of users across many organizations” points to a familiar pattern in modern software: a small vendor provides convenience, a larger platform integrates it, and downstream customers discover their exposure only after credentials circulate on criminal forums. The seller of the Vercel data claimed affiliation with ShinyHunters, a name associated with cloud and database breaches, though ShinyHunters denied involvement to BleepingComputer, according to TechCrunch.
Context AI, which builds evaluation and analytics tools for AI models, separately confirmed a March breach involving its “Context AI Office Suite” consumer app and said attackers likely compromised OAuth tokens for some consumer users. The company said it notified one customer at the time but now believes the incident was broader—an admission that highlights how partial disclosure at the vendor layer can delay defensive action by everyone else.
The timing also collides with a second, more structural shift: AI labs are increasingly marketing models for cyber use. Ars Technica, citing the Financial Times, reports that Anthropic’s new “Mythos” model can find software flaws faster than humans and can also generate exploits, prompting concern among governments and companies that defensive patching will not keep up. CrowdStrike data cited in the report shows AI-enabled cyber attacks up 89% in 2025 and “breakout time” falling to 29 minutes.
Taken together, the Vercel incident and the push toward cyber-capable models describe the same bottleneck: organizations are adding integrations and automation faster than they can audit permissions, rotate secrets, and restrict what a single compromised identity can reach. The more software becomes a chain of delegated access, the more a breach looks like paperwork—tokens, scopes, keys—until it turns into a customer-facing outage.
Vercel says it is still investigating what was accessed and how many customers are affected.