Anthropic adds ID verification for some Claude users
Persona checks government IDs and live selfies for higher-tier access, as AI chat begins to resemble KYC onboarding
Anthropic has begun asking some Claude users to verify their identity with a government-issued photo ID and a live selfie scan, according to an April help-centre update cited by Bitcoin.com. The checks are not universal: they appear in specific cases tied to higher-tier plans, access to advanced capabilities, or internal safety reviews.
The mechanics are familiar from fintech and gig platforms. Users are routed through Persona, a third-party identity provider, and must submit a passport, driver’s licence, or national ID; screenshots, digital copies, and temporary paper documents are rejected, the report says. Anthropic says it does not store the ID images on its own systems, and that identity data is not used to train models or for marketing. But the practical effect is still a new gate on access: a subset of users now need to attach a real-world identity to a tool that, for many, has been treated like a private notebook.
The move highlights a split in consumer AI product strategy. OpenAI’s ChatGPT and Google’s Gemini do not generally require government ID for standard chatbot use, and critics quoted by Bitcoin.com argue Anthropic is handing competitors an advantage. That complaint is less about principle than about switching costs: if the product is substitutable, the strictest onboarding becomes a tax on retention. In that environment, identity checks tend to land first where the provider feels it has leverage—enterprise tiers, powerful features, or accounts flagged for review—rather than as a blanket rule.
Anthropic frames the change as abuse prevention and compliance: limiting fraud, impersonation, and rule-breaking, and meeting legal obligations where applicable. The report also notes age-related enforcement, with some under-18 accounts reportedly suspended pending verification. That points to a broader reality around generative AI: as tools become more capable, the pressure to treat them like regulated infrastructure grows, even when explicit regulation has not yet arrived.
For now, Anthropic’s ID checks are targeted and conditional. But the workflow—ID, selfie, third-party verification—already exists, and once it is built, expanding it is mostly a product decision.