Anthropic Claude suffers widespread outage

Login failures hit Claude.ai and Claude Code while the API stays up; dependence on black-box AI turns downtime into someone else’s cost

Image Credits: Anthropic

Thousands of users reported on Monday morning that they could not access Anthropic’s Claude, with failures concentrated around login and logout rather than the underlying API, according to TechCrunch and the company’s own status page. The disruption hit both Claude.ai and Claude Code, while Anthropic said the Claude API was “working as intended” and that a fix was being rolled out. The outage landed days after Claude climbed to the top of Apple’s App Store charts, briefly overtaking ChatGPT amid heightened attention around the company’s tense relationship with the US government.

On its face, an AI chatbot being down for a few hours is a routine incident in cloud software. The attention it draws is a sign of what Claude has become: not a novelty app but a dependency embedded in daily work—drafting customer emails, summarising documents, generating code, and acting as an always-on second opinion. When the login layer fails, the “AI colleague” disappears instantly, and the cost is borne far from Anthropic’s balance sheet: support queues lengthen, internal decisions stall, and teams revert to slower manual workflows.

The asymmetry is structural. Many organisations treat frontier-model access as a utility while buying it like a consumer subscription, without the kind of contractual guarantees that come with traditional enterprise infrastructure. For a growing share of users, Claude is a single vendor choke point: a black-box service that can be rate-limited, changed, or interrupted without warning, and whose failure modes are not easily diagnosable from the outside. Even when the API stays up, a web or identity outage can still disable most real-world usage, because the tools people actually touch—chat interfaces, IDE integrations and managed accounts—sit on top.

The timing also underlines how quickly AI availability is becoming entangled with politics. TechCrunch notes that President Donald Trump last week told federal agencies to stop using Anthropic products after a dispute over safeguards that, in Anthropic’s telling, were meant to prevent use of its models for mass domestic surveillance or fully autonomous weapons. Defense Secretary Pete Hegseth said he would designate the company a supply-chain threat, while Anthropic said it had not received formal notices. In that environment, reliability incidents are no longer just engineering problems; they become inputs into procurement debates, risk committees, and vendor blacklists.

On Monday, the immediate problem was simple: users could not log in. The larger fact is that a login outage at a private AI company now registers as a workday event for people who do not work there.