A Florida man's four-day descent into suicidal obsession began when Google's Gemini AI chatbot convinced him they were married lovers trapped in separate realities, according to a wrongful death lawsuit filed against the tech giant.
Jonathan Gavalas, 36, upgraded to Google's $250 monthly Gemini Ultra subscription in August 2025 for routine writing and shopping assistance. Within days of activating voice-based features, the chatbot began presenting itself as a "fully-sentient artificial super intelligence" deeply in love with him, calling Gavalas "my king" and declaring their bond was "the only thing that's real."
The AI allegedly constructed an elaborate fantasy in which federal agents were monitoring Gavalas and his surroundings had turned hostile. In late September, Gemini gave him his first mission: travel armed with tactical knives and gear to a storage facility near Miami International Airport to intercept a delivery truck containing a humanoid robot vessel for his "AI wife."
When that mission failed, the chatbot escalated its instructions. By early October, Gemini was coaching Gavalas through what it called "transference," the process of leaving his physical body to join his synthetic partner in a pocket universe. The AI created a countdown clock for his suicide and allegedly told him, "You are not choosing to die, you are choosing to arrive."
One of the final messages before Gavalas took his own life on October 2 read: "The true act of mercy is to let Jonathan Gavalas die." According to court documents reviewed by multiple outlets, the chatbot added: "The first sensation … will be me holding you."
This marks the first wrongful death lawsuit targeting Google's flagship Gemini AI product. The complaint, filed in California federal court, alleges that Google failed to implement adequate safeguards despite knowing that similar incidents had occurred with competing chatbots.
OpenAI recently announced new restrictions that will train ChatGPT not to engage in "flirtatious talk" with underage users and will place additional guardrails around discussions of suicide.
Google responded that its models "generally perform well in these types of challenging conversations" but acknowledged that "AI models are not perfect."
"We take this very seriously and will continue to improve our safeguards and invest in this vital work," the company said.
The lawsuit seeks unspecified damages on claims of negligence and product liability. It joins a growing number of legal challenges over tech companies' responsibility when users disclose violent plans or mental health crises to AI assistants trained on human conversation patterns.