Months before an 18-year-old killed eight people in a British Columbia school shooting, OpenAI employees debated contacting Canadian police about her violent ChatGPT conversations but decided against it.
Jesse Van Rootselaar's ChatGPT interactions describing gun violence scenarios were flagged by automated systems in June 2025, according to multiple reports. Roughly a dozen staff members reviewed the flagged conversations over several days, with some pushing to notify law enforcement.
OpenAI management rejected those calls, citing privacy concerns and warning that over-reporting could cause "distress" to young people and their families. The company maintained that Van Rootselaar's activity didn't meet its threshold for contacting authorities, which requires showing "an imminent and credible risk of serious physical harm to others."
The February 10 massacre at Tumbler Ridge Secondary School has triggered a political response from Canada's top technology official. Federal AI Minister Evan Solomon said he was "deeply disturbed" by reports that concerning online activity wasn't reported to law enforcement in a timely manner.
British Columbia Premier David Eby called the revelations "profoundly disturbing" and confirmed police are pursuing preservation orders for evidence held by digital service companies, including AI platforms. An RCMP spokesperson told CBC News that OpenAI initially flagged Van Rootselaar's account only internally, contacting police only after the shooting.
After the shooting, OpenAI confirmed it proactively contacted the Royal Canadian Mounted Police with information about Van Rootselaar's ChatGPT use. The company said it continues to support the investigation while maintaining its original position that the pre-shooting communications didn't warrant police involvement.
The case exposes growing tension between AI companies' content moderation policies and public safety expectations. As automated systems detect increasingly sophisticated threats, corporate decisions about when to escalate warnings face scrutiny from regulators worldwide.
Canada's response includes direct engagement with AI firms about their safety protocols. Minister Solomon emphasized the need for companies to have strong escalation practices that protect online safety while ensuring timely warnings reach appropriate authorities.