OpenAI Flagged Shooter—No Police Call

When an AI system flags “furtherance of violent activities” but decides the threat isn’t “imminent” enough to tell police, the public is left asking who’s really responsible for preventing the next tragedy.

Quick Take

  • OpenAI detected abusive activity linked to Jesse Van Rootselaar in June 2025 and weighed notifying Canada’s RCMP, but decided the risk did not meet its threshold for a referral.
  • The account was banned for policy violations, yet Van Rootselaar later carried out a deadly February 10, 2026, rampage in Tumbler Ridge, British Columbia, killing eight people before dying by suicide.
  • OpenAI said it provided account details to the RCMP after the shooting, as investigators worked to determine motive and reconstruct events.
  • The case spotlights the unresolved tension between privacy policies, corporate liability concerns, and the public’s demand for proactive threat reporting.

What OpenAI Saw—and Why It Didn’t Call Police

OpenAI said it detected abusive activity on an account belonging to Jesse Van Rootselaar in June 2025 and, citing possible “furtherance of violent activities,” considered referring the matter to the Royal Canadian Mounted Police. The company ultimately decided against contacting authorities because the material did not meet its internal threshold of an “imminent and credible risk of serious physical harm.” OpenAI then banned the account for policy violations, according to reporting that cited company statements.

That decision sits at the center of the controversy because it exposes how much discretion private tech companies retain when they see troubling content. OpenAI’s description suggests the account raised red flags but fell into a gray zone short of an actionable, time-bound threat. The available reporting does not detail what prompts were entered or what outputs were produced, limiting the public’s ability to judge whether the “imminent harm” standard was applied too narrowly.

The February 2026 Tumbler Ridge Attack and Its Victims

Canadian authorities said Van Rootselaar, 18, carried out the attack on February 10, 2026, in Tumbler Ridge, a remote community of about 2,700 people in British Columbia. Investigators said the violence began at home with the killing of the shooter’s mother and stepbrother. Police said the shooter then attacked a school, killing a 39-year-old teaching assistant and five students aged 12 to 13.

Authorities said Van Rootselaar died from a self-inflicted gunshot wound. Reports described the incident as among Canada’s worst school-related attacks in recent memory, with police tape surrounding the school as the community tried to process the scale of the loss. The reporting also noted uncertainty around motive and limited publicly available detail about how much, if at all, AI interactions influenced the planning or mindset behind the violence.

RCMP Investigation and OpenAI’s Post-Shooting Cooperation

After the shooting, OpenAI said it reached out to the RCMP and provided information connected to the account, stating it would continue supporting the investigation. The RCMP confirmed the sequence of the attack, first at the home and then at the school, and said the motive remained unclear. As of the most recent reporting, investigators had not released a public explanation tying any specific ChatGPT activity to the timeline of the murders.

That distinction, between cooperating with investigators after the fact and any proven link between AI use and the attack, matters because a technology angle can quickly turn into political theater when facts are thin. The reporting indicates OpenAI acted decisively in one sense, banning the account, while remaining reactive in another, not alerting police until after people were dead. Without more disclosure about what was detected in June 2025, the public debate risks becoming a proxy fight over tech regulation rather than a focused discussion of threat-reporting standards.

Privacy vs. Public Safety: The “Imminent Harm” Threshold Problem

The core policy question is whether “imminent and credible risk” is the right trigger for notifying law enforcement when a platform detects content that appears to support violence. A strict threshold can protect user privacy and reduce false alarms, but it can also delay intervention when a user is escalating toward harm. The limited information available suggests OpenAI weighed this balance and chose non-referral, highlighting how private companies effectively set public-safety rules without public accountability.

For Americans already wary of bureaucratic failure and institutional negligence, the takeaway is not that AI should become a surveillance arm of the state. The takeaway is that unclear standards invite inconsistent decisions—some overly cautious, others overly permissive—while families live with the consequences. If policymakers respond, any framework should be narrowly tailored: focused on credible threats of violence, transparent about thresholds, and respectful of due process rather than open-ended monitoring.

In the meantime, the clearest confirmed facts are also the hardest: OpenAI detected troubling activity months earlier, did not notify police at the time, and later shared information after the killings. That sequence is now part of the RCMP’s broader investigation into how this happened in a small, isolated town. What remains unresolved—because reporting is limited—is whether any earlier alert would have changed outcomes, or whether existing mental health and law-enforcement touchpoints already failed to prevent the attack.

Sources:

ChatGPT-maker OpenAI considered alerting Canadian police about school shooting suspect months ago