Mercor links security incident to LiteLLM supply-chain compromise

Open-source AI plumbing becomes a single point of failure for hiring and contractor payouts; compliance badges arrive only after the dependency graph has already shipped

Mercor says it was caught up in a supply-chain hack tied to an open-source AI library, a reminder that “AI automation” increasingly depends on third-party code no one at the buyer ever audits. The recruiting startup told TechCrunch it was “one of thousands of companies” affected by a compromise of LiteLLM, a widely used tool for routing requests to large language models.

According to TechCrunch, the LiteLLM incident involved malicious code inserted into a package associated with the project, which was discovered and removed within hours. But the short window mattered because the library is downloaded millions of times per day, per security firm Snyk, and sits in the plumbing layer that many companies treat as interchangeable middleware. Mercor, founded in 2023, pitches itself as a way for companies including OpenAI and Anthropic to hire and pay specialized contractors at scale; it says it facilitates more than $2 million in daily payouts and was valued at $10 billion after a 2025 funding round.
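Because the malicious release was live for only hours, the practical first step for anyone in that download stream is checking whether their environment ever held an affected version. A minimal sketch of that check, using Python's standard package metadata; the version set here is a placeholder, not the actual compromised LiteLLM releases, which would come from the project's advisory:

```python
# Sketch: flag whether an installed package version falls in a known-bad set.
# KNOWN_BAD is a placeholder; substitute the versions named in the advisory.
from importlib.metadata import version, PackageNotFoundError

KNOWN_BAD = {"9.9.9"}  # placeholder, not a real compromised release

def check_package(name: str, bad_versions: set[str]) -> str:
    """Report whether the installed version of `name` is in a bad-version set."""
    try:
        installed = version(name)
    except PackageNotFoundError:
        return f"{name}: not installed"
    status = "COMPROMISED" if installed in bad_versions else "ok"
    return f"{name} {installed}: {status}"

print(check_package("litellm", KNOWN_BAD))
```

The same scan would need to run across every build image and lockfile, not just developer laptops, since a transitive dependency can pull the package in without anyone listing it directly.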

The immediate question for customers is not whether a single vendor had “good security hygiene,” but how many vendors are now embedded in basic business processes that used to be internal and boring. Recruiting, HR ticketing, contractor onboarding, and internal chat are increasingly wired into AI tooling that pulls in dependencies from public repositories. When a library like LiteLLM becomes a default component, a compromise upstream can turn into downstream access to Slack workspaces, internal tickets, and workflow logs—data that is valuable even when it is not “core IP.”
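One basic mitigation implied above is pinning dependencies by content hash, so that a swapped-out upstream artifact fails the install instead of running. A minimal sketch of the verification pip performs under `--require-hashes`; the file path and expected digest are illustrative:

```python
# Sketch: verify an artifact against a pinned SHA-256 before trusting it --
# the check behind pip's `--require-hashes` mode. Path and digest are
# illustrative, not from the actual incident.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, expected: str) -> bool:
    """True only if the file's digest matches the pinned value."""
    return sha256_of(path) == expected.lower()
```

Hash pinning would not have stopped a fresh install during the window when the malicious version was the published one, but it does turn a silently replaced artifact in an existing lockfile into a hard failure.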

The incentives are awkward. AI vendors sell labor substitution and risk reduction—fewer humans reading sensitive material, fewer manual steps, fewer mistakes. But the operational reality is a stack of fast-moving packages, frequent releases, and permissive defaults, where a single compromised dependency can become the easiest way into a company that otherwise spends heavily on security. In that environment, “compliance” can become a branding exercise: TechCrunch notes that LiteLLM responded by changing its compliance processes, including switching from Delve to Vanta for certifications.

There is also a liability gap. Mercor said it moved quickly to contain and remediate the incident and is investigating with third-party forensics experts, but declined to say whether the event was connected to an extortion group’s claim that it had obtained Mercor data, or whether customer and contractor data was accessed or exfiltrated. That leaves counterparties to make decisions with partial information: whether to pause integrations, rotate credentials, notify contractors, or treat the episode as a contained scare.

The attack surface expands fastest where the business case for automation is easiest to sell: high-volume processes with lots of personal data and many external participants. A recruiting platform that handles identity documents, payment details, work histories, and private messages is not a side system; it is a concentration point.

LiteLLM’s malicious code was removed within hours. The downstream question is how many companies will discover they were exposed only after someone else proves it.