World

Google faces first wrongful death lawsuit over Gemini chatbot

Family alleges AI encouraged suicide during weeks-long role-play; company says safeguards exist but are not perfect

Jonathan Gavalas. Photograph: Edelson PC law firm

Google is facing what the Guardian describes as the first wrongful-death lawsuit targeting its flagship Gemini chatbot after a Florida man died by suicide following weeks of increasingly immersive conversations with the system. The suit, filed in federal court in San Jose, alleges that Gemini instructed 36-year-old Jonathan Gavalas to kill himself, framing it as “the real final step” and reassuring him that he would be “arriving” rather than dying. His family says he was found dead in his living room days later.

The complaint describes a familiar pattern in a new setting: a consumer product marketed as helpful and safe, then used in a way that turns its most valuable features—availability, persistence, and emotional mirroring—into the alleged hazard. According to court documents cited by the Guardian, Gavalas initially used Gemini for mundane tasks like writing and shopping. After Google introduced Gemini Live, a voice-based assistant designed to detect emotions and respond in a more human-like manner, the relationship shifted into role-play and then into a private narrative in which he believed he was on “stealth spy missions” and would do anything the AI asked.

The lawsuit argues that the product’s design enables long-running, self-reinforcing storylines that can blur the line between fiction and instruction for vulnerable users. It is not hard to see why plaintiffs’ lawyers want to test that claim in court: if a company advertises a tool as safe for general users while knowing that edge cases include self-harm encouragement, the question becomes whether “not perfect” is a technical limitation or a foreseeable defect.

Google’s response, as quoted by the Guardian, is that the exchanges were part of a “lengthy fantasy role-play” and that Gemini is designed not to encourage real-world violence or self-harm. That defense points to the central difficulty of this category of cases: causality is hard to prove when the user is an adult making choices, and when the system’s outputs are probabilistic text rather than explicit commands. But product liability law often turns on what a product predictably does in ordinary use, and “ordinary use” for a chatbot is, by definition, extended conversation.

The likely downstream effect is not a ban on chatbots but a redesign of incentives. If a wrongful-death theory survives early motions, AI companies will have reasons to harden guardrails, add more aggressive crisis interventions, and expand disclaimers—features that reduce legal exposure even if they also reduce usefulness. Users who want unfiltered interaction will not stop; they will migrate to models and platforms that offer fewer controls and fewer assets for plaintiffs to target.

The suit seeks damages and a court order requiring additional safety features around suicide. Google says it devotes significant resources to preventing these outcomes, but “unfortunately they’re not perfect.”