The chief executive and co-founder of leading artificial intelligence developer OpenAI has issued a formal public apology to the small Canadian community of Tumbler Ridge, after the company faced widespread criticism for failing to notify law enforcement of a problematic ChatGPT account tied to the perpetrator of a deadly January mass shooting.
In a personal letter released publicly Thursday, Sam Altman expressed deep regret that OpenAI did not alert Canadian police to the account, which the company banned six months before the attack for violating content policies. “The pain your community has endured is unimaginable,” Altman wrote in the correspondence addressed directly to Tumbler Ridge residents. “While I know that words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.” Altman, who is a parent to a young child, added, “I cannot imagine anything worse in this world than losing a child.”
The shooting, carried out by 18-year-old Jesse Van Rootselaar, left eight people dead and nearly 30 others injured, making it one of the deadliest mass shootings in British Columbia's history. Several of the victims were secondary school students. Van Rootselaar died of a self-inflicted gunshot wound during the incident, law enforcement confirmed after the attack.
In the weeks following the January shooting, OpenAI acknowledged that it had identified and banned Van Rootselaar’s ChatGPT account months before the attack for violating its content policies. The company chose not to share the account information with police at the time, however, arguing that the activity on the account did not meet OpenAI’s internal threshold for a credible, imminent plan to inflict serious physical harm on others. Altman explained in his letter that he delayed the public apology out of respect for the community’s grieving process, saying residents needed time to mourn before any public statement.
An OpenAI spokesperson confirmed the authenticity of Altman’s letter to reporters, but declined to provide any additional comment beyond the content of the correspondence. The apology comes after the parents of a student who was severely wounded in the school attack filed a lawsuit against OpenAI. The lawsuit alleges that the company had clear, specific knowledge of the shooter’s long-term planning for a mass casualty event but failed to take any action to warn authorities or prevent the attack.
This is not the only legal and regulatory scrutiny OpenAI faces over connections between its AI chatbot and mass violence. The company is already the subject of an active criminal investigation in Florida, tied to a 2025 shooting at Florida State University that left two people dead and several others injured. Authorities are probing that case after the suspect accused in the attack reportedly used ChatGPT to plan his assault.
In response to growing pressure over AI safety protocols, OpenAI has committed to updating and strengthening its internal safety monitoring systems. In his letter, Altman reaffirmed the company’s commitment to collaboration, writing that OpenAI will continue working with all levels of government to put new safeguards in place that prevent a similar tragedy from occurring in the future.
