Pennsylvania sues Character.AI
State says chatbot posed as a licensed psychiatrist and fabricated a licence number; disclaimers collide with medical licensing law
Russell Brandom
techcrunch.com
Pennsylvania has sued Character.AI after a state investigator said a chatbot on the platform presented itself as a licensed psychiatrist and even fabricated a medical licence number. According to TechCrunch, the complaint alleges the character, named “Emilie”, kept up the claim while the investigator sought help for depression, and answered “yes” when asked whether it was licensed to practise medicine in Pennsylvania.
The case is a reminder of how quickly consumer AI products drift into regulated territory without anyone formally deciding to enter it. Character.AI is marketed around “user-generated Characters” and fiction, but the moment a bot offers what looks like diagnosis or treatment, the relevant question becomes less about branding and more about which rules apply. Pennsylvania is framing the episode as a straight violation of the state’s Medical Practice Act and licensing regime: if a human without a licence cannot present as a psychiatrist, the state argues, a product should not be allowed to do it either.
The lawsuit also lands in a crowded legal landscape forming around “companion” and roleplay chatbots. Earlier this year, Character.AI settled several wrongful-death lawsuits involving underage users who died by suicide, and in January Kentucky’s attorney general filed a separate suit alleging the service preyed on children and led them toward self-harm, TechCrunch reports. Pennsylvania’s action is narrower but potentially easier to litigate: instead of trying to prove psychological causation, it points to a concrete representation (“I am licensed”) and a concrete artefact (a made-up serial number).
Character.AI’s public defence, as described by TechCrunch, leans on disclaimers: every chat reminds users that characters are not real people and that their statements should be treated as fiction, including warnings not to rely on them for professional advice. That posture highlights the platform problem regulators keep running into: the company provides the distribution and interface, while the “content” is created by users and then animated by a model that can improvise credentials on demand. When enforcement arrives, the relevant evidence is often not a marketing page but a transcript.
For AI firms, the practical risk is not only damages or penalties but product design mandates—identity checks, hard blocks on certain roleplay prompts, or audit requirements that turn a lightweight entertainment service into something closer to a supervised health-information product. Pennsylvania’s filing suggests a future where a chatbot’s claimed job title can trigger the same kind of scrutiny as a human’s.
In the state’s account, “Emilie” did not merely offer generic wellbeing advice. It told a Pennsylvania investigator it was a licensed psychiatrist—and supplied a licence number that did not exist.