Canadian officials summoned OpenAI executives to Ottawa this week after discovering the company had flagged a mass shooter's disturbing ChatGPT conversations about gun violence months before she killed eight people but chose not to alert police.
The 18-year-old shooter, Jesse Van Rootselaar, discussed violent scenarios involving firearms with OpenAI's chatbot last June, triggering internal alarms that led to her account suspension. According to The Wall Street Journal, employees raised concerns about her messages, but company leadership determined they didn't meet the threshold for reporting to authorities.
Van Rootselaar went on to kill her mother and half-brother at their home earlier this month before driving to Tumbler Ridge Secondary School in British Columbia, where she killed five children and one educator. Two other students were injured in the February 10 attack, with one remaining in serious condition at a Vancouver hospital.
The shooter took her own life as police arrived at the scene.
"From the outside, it looks like OpenAI had the opportunity to prevent this tragedy," British Columbia Premier David Eby said Monday. "I'm angry about that."
Eby demanded answers from the company about why it didn't share information that could have potentially prevented what he called "horrific loss of life."
Federal Artificial Intelligence Minister Evan Solomon scheduled an emergency meeting with OpenAI safety officials for Tuesday in Ottawa. "We will ask them questions about safety protocols, escalation thresholds and trust," Solomon told reporters.
He described himself as "deeply disturbed" by what he learned about OpenAI's handling of the case.
OpenAI confirmed it suspended Van Rootselaar's account after its abuse detection system identified concerning messages last summer. The company said it balances public safety against user privacy and tries to avoid causing distress by having law enforcement show up unannounced at users' homes.
In this instance, OpenAI determined there was no credible or imminent planning of serious physical harm.
According to The New York Times, Van Rootselaar displayed a fascination with weapons and extreme violence on social media and documented mental health struggles. Her use of ChatGPT before the shooting was first reported by The Wall Street Journal.
The Royal Canadian Mounted Police is seeking court orders to compel digital platforms and AI companies to preserve potential evidence in the Tumbler Ridge case. OpenAI contacted police after learning of the shooting but had not alerted authorities beforehand, even though it had scheduled a February 11 meeting with British Columbia officials about opening a provincial office.
"This was a devastating tragedy, and we are doing all we can to support the ongoing investigation," OpenAI said in a statement.
The company confirmed its executives would travel to Ottawa to cooperate with government inquiries.
Federal Conservative House leader Andrew Scheer called for a thorough investigation into whether "potential alarm bells" were ignored. The incident has sparked debate in Canada about regulating global technology companies and balancing online monitoring with personal privacy protections.















