In a groundbreaking legal case with profound implications for artificial intelligence accountability, OpenAI faces a civil lawsuit brought by the family of a school shooting victim, alleging the company failed to act on warnings that could have prevented one of Canada’s deadliest mass shootings.
The lawsuit centers on the February 10th Tumbler Ridge school shooting that left eight dead, including five young children and the shooter’s mother. Twelve-year-old Maya Gebala, who sustained catastrophic head and neck injuries in the attack, remains hospitalized. Her mother, Cia Edmonds, filed the suit, claiming OpenAI possessed specific knowledge of the shooter’s plans but neglected to alert authorities.
According to court documents, the suspect, 18-year-old Jesse Van Rootselaar, opened a ChatGPT account before turning 18, reportedly without proper age verification, and engaged the AI in extensive discussions about “various scenarios involving gun violence” in late spring or early summer 2025. The conversations prompted twelve OpenAI employees to flag the content as indicating “imminent risk of serious harm to others” and to recommend notifying Canadian law enforcement.
Instead of contacting authorities, OpenAI allegedly did nothing more than ban the suspect’s initial account. Court documents claim the company determined that its internal threshold for reporting credible threats had not been met, a decision that let Van Rootselaar open a second account and continue planning the attack despite the earlier flags in OpenAI’s systems.
The lawsuit argues that ChatGPT served as the shooter’s “trusted confidante” during the planning stages and that OpenAI’s inaction directly contributed to the tragedy. Gebala, who was shot three times while attempting to lock a library door to protect others, suffered life-altering injuries, including severe brain trauma.
In response to mounting pressure, OpenAI CEO Sam Altman met virtually with Canadian AI Minister Evan Solomon and British Columbia Premier David Eby on March 4th. During the meeting, Altman reportedly pledged to strengthen police notification protocols and apologized to the Tumbler Ridge community.
The company has since made operational changes, including bringing in mental health professionals to assess high-risk interactions and lowering the bar for referring threats to law enforcement. In an open letter to Canadian officials, OpenAI stated that under its current guidelines, the suspect’s account would have been reported to authorities.
Canadian officials have acknowledged OpenAI’s willingness to improve but emphasize that detailed implementation plans remain pending. The case represents a critical test of AI companies’ responsibility to identify and prevent real-world violence facilitated through their platforms.
