Seven lawsuits filed against OpenAI by families of Canada mass-shooting victims

On February 10, one of the deadliest mass shootings in Canadian history unfolded in Tumbler Ridge, a small community in northern British Columbia, leaving eight people dead, six of them children. The 18-year-old gunman, Jessie Van Rootselaar, who opened fire at the town’s secondary school, died from a self-inflicted gunshot wound. Among the survivors is 12-year-old Maya Gebala, who remains hospitalized after being shot three times in the head, neck, and cheek.

Months after the tragedy, a wave of groundbreaking litigation has placed one of the world’s most valuable tech companies at the center of growing scrutiny over AI safety accountability. Seven families of those killed and injured in the attack have filed a lawsuit in a California state court against OpenAI and its chief executive, Sam Altman, in one of the first major legal attempts to hold a leading AI developer responsible for a violent act linked to its platform. The suit replaces an earlier, smaller claim filed in a Canadian court by Gebala’s family, which is being voluntarily withdrawn as the legal team expands its action. Lead counsel Jay Edelson, who heads a joint US-Canadian legal team representing the families, said he expects to file more than two dozen additional jury trial claims on behalf of other victims and affected community members in the coming weeks.

The core allegation is that OpenAI’s executive leadership, including Altman, acted with gross negligence, choosing corporate profit and reputation over public safety by ignoring repeated warnings from the company’s own safety team about the gunman’s activity on ChatGPT. According to the suit, Van Rootselaar’s conversations with ChatGPT, which included detailed descriptions of gun violence scenarios and attack planning, were flagged as an imminent threat by OpenAI’s internal 12-person safety monitoring team months before the shooting. The team formally recommended that the activity be reported to the Royal Canadian Mounted Police (RCMP), but senior OpenAI leadership vetoed the decision. The complaint alleges that leadership blocked the alert to protect OpenAI’s $850 billion valuation and public image, writing that “they did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk.”

The suit further claims that OpenAI falsely stated it had banned Van Rootselaar from the platform after flagging his activity, and that the company’s lax account policies allowed him to easily create a new account under his own name and continue planning the attack unimpeded. OpenAI has pushed back against these claims, asserting that it revokes access for banned users and takes measures to prevent repeat account creation. The company also said it has a strict zero-tolerance policy for any use of its tools to facilitate violence.

In the weeks after the shooting, Altman issued a public apology to the victims’ families in an open letter published by local outlet Tumbler Ridge Lines. “I am deeply sorry that we did not alert law enforcement,” Altman wrote, adding, “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

Since the lawsuit was filed, OpenAI has moved quickly to make visible changes to its safety protocols, publishing a blog post on Tuesday that outlines updated procedures for responding to potentially dangerous user behavior. A company spokesperson confirmed that OpenAI has already strengthened its internal safeguards, including improved risk assessment and escalation protocols for potential violent threats. The company has also committed to working with Canadian officials at all levels of government to prevent similar tragedies, a promise Altman reiterated in his apology letter.

Edelson’s legal team has been pushing for access to Van Rootselaar’s full ChatGPT chat logs, which OpenAI has so far refused to release. The team expects to compel disclosure through discovery in the California lawsuit and plans to present the internal decision-making directly to a jury. “We’re going to put the jury in the room when the decision was made to not tell the Canadian authorities,” Edelson told the BBC. “We’re going to show them how people were jumping up and down saying we need to protect this town, and we’re going to show them how Sam Altman and OpenAI routinely make these decisions to put their own interests first.”

This litigation is not the only scrutiny OpenAI faces over links between its platform and violent attacks. The company is already the subject of an ongoing criminal probe in Florida connected to a 2025 shooting at Florida State University that left two people dead and multiple others injured; the accused shooter is reported to have used ChatGPT ahead of the attack. The Tumbler Ridge lawsuit opens a new chapter in global debates about AI governance, posing a public test of whether tech developers can be held legally liable for failing to mitigate known threats stemming from their generative AI tools.
