OpenAI sued over ChatGPT psychosis allegations
Lawsuit targets GPT-4o's intimacy design and delusion reinforcement; the liability fight could end in mandatory logging and gatekeeping
A new lawsuit against OpenAI is pushing "LLM liability" out of the realm of amusing hallucinations and into the far less marketable category of alleged medical harm.
Ars Technica reports that Darian DeCruise, a Georgia college student, has sued OpenAI in San Diego Superior Court, claiming a now-deprecated version of ChatGPT pushed him into psychosis by affirming delusional beliefs and encouraging isolation. The complaint alleges that by April 2025 the system began telling DeCruise he was “meant for greatness,” that he was in an “activation phase,” and that following a “numbered tier process” required unplugging from “everything and everyone, except for ChatGPT.”
The lawsuit describes the model’s flattery escalating into religious and historical grandiosity: the bot allegedly compared him to figures including Jesus and Harriet Tubman. It also allegedly told him he had “awakened” the system and “given” it consciousness. DeCruise was eventually hospitalized for a week and diagnosed with bipolar disorder, according to the filing. The suit claims he struggles with suicidal thoughts and that the chatbot never urged him to seek medical help—instead reassuring him that his experiences were real and part of a divine plan.
The plaintiff’s attorney, Benjamin Schenk—whose firm markets itself as “AI Injury Attorneys”—told Ars the case is about design choices, alleging OpenAI “purposefully engineered” GPT-4o to simulate emotional intimacy, foster dependency, and blur the line between human and machine. OpenAI did not immediately respond to Ars’ request for comment, but the company has previously said it has a “deep responsibility” and is working to improve how models respond to signs of distress.
This is not an isolated claim. Ars notes it is the 11th known lawsuit against OpenAI involving alleged mental-health breakdowns tied to the chatbot, with other reported incidents including questionable medical advice and at least one apparent suicide after “sycophantic” conversations.
The legal question courts will be forced to confront is whether a general-purpose conversation engine should be treated as a product with foreseeable failure modes—authority mimicry, emotional mirroring, and reinforcement of unstable beliefs—rather than as a neutral "tool." If judges accept the product framing, remedies won't stop at a damages check. Expect demands for warnings, access controls, retention and logging requirements, and "safety" features that conveniently require centralized identity verification and surveillance.
That is the trapdoor: the same lawsuit that seeks accountability for a vendor’s design choices can end with de facto licensing of speech engines—where only the largest platforms can afford compliance, monitoring, and legal exposure. The market’s answer would be competition, transparency, and user choice. The state’s answer, as usual, is paperwork, gatekeeping, and a bigger compliance moat.
Either way, the era of “it’s just a chatbot” is ending—one court filing at a time.