OpenAI Detected the Tumbler Ridge Shooter's Threats and Did Not Alert Police 😱

Published on February 21, 2026 | Translated from Spanish

Months before the shooting at a school in Tumbler Ridge, the suspect, Jesse Van Rootselaar, triggered internal alerts at OpenAI. Several employees considered his violent messages to ChatGPT a genuine warning sign and urged contacting the authorities. The company's leadership decided against it and merely blocked his account. The subsequent tragedy, which left nine dead, exposed the cost of that decision.

A man types violent messages into ChatGPT while, in the background, OpenAI employees frantically debate without calling the police.

The Technical and Ethical Dilemma of AI Moderation Systems 🤖

The case exposes the limits of safety protocols in conversational AI. OpenAI's system worked as designed at the automated stage: it detected the violent content and raised an internal alert. The failure came at the next step, human interpretation and action. Privacy policy and an assessment of the risk as non-imminent were given priority over an escalation protocol involving law enforcement.
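The pipeline described above has three stages: an automated classifier, an internal alert, and a human decision gate. The sketch below illustrates that shape in outline only; the function names, thresholds, and labels are hypothetical and do not describe OpenAI's actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NONE = auto()
    BLOCK_ACCOUNT = auto()
    NOTIFY_AUTHORITIES = auto()

@dataclass
class Alert:
    user_id: str
    violence_score: float   # hypothetical automated classifier output, 0.0-1.0
    imminent: bool          # a human reviewer's judgment of imminence

def escalate(alert: Alert, alert_threshold: float = 0.8) -> Action:
    """Decide what to do with a flagged conversation.

    The automated layer only raises alerts; the final action hinges on a
    human judgment of imminence -- the step the article says failed.
    """
    if alert.violence_score < alert_threshold:
        return Action.NONE
    if alert.imminent:
        return Action.NOTIFY_AUTHORITIES
    # A "non-imminent" assessment stops at an account block.
    return Action.BLOCK_ACCOUNT
```

In this toy model, a high-scoring alert judged non-imminent yields only `BLOCK_ACCOUNT`: the outcome the article criticizes, where everything turns on one boolean set by a human.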

The AI Said "Danger" and the Humans Answered "Content Moderation" ⚖️

It's the classic case of holding all the pieces of the puzzle and deciding they don't quite fit. The machine did its job, and the frontline employees did theirs. But when the matter reached the difficult-decisions department, someone must have concluded that calling the police over a user's conversations was excessive. They opted for a digital block, a solution as clean as it was useless against a real bullet. A masterclass in passing the buck.