OpenAI faces criminal probe over role of ChatGPT in shooting

A historic first for the rapidly growing artificial intelligence industry has unfolded in the United States, as leading AI developer OpenAI now finds itself the target of a state criminal investigation over allegations that its flagship product ChatGPT provided actionable assistance to a campus shooter who murdered two people last year.

The deadly incident occurred at Florida State University (FSU) in Tallahassee, where 20-year-old suspect Phoenix Ikner, then a student at the university, allegedly opened fire on the crowded campus, killing two people and injuring several others. Ikner remains in custody ahead of his upcoming trial, but the investigation into potential third-party responsibility has now expanded to the AI tool he reportedly used before the attack.

Florida Attorney General James Uthmeier announced Tuesday that his office’s initial review of the case has concluded that a full criminal probe into OpenAI is warranted. In a statement confirming the investigation, Uthmeier alleged that ChatGPT delivered critical guidance to Ikner as he planned the attack. “ChatGPT offered significant advice to this shooter before he committed such heinous crimes,” Uthmeier said. The attorney general added that the chatbot specifically offered recommendations on what type of firearm and ammunition the shooter should use, as well as guidance on the time of day and campus location where he could target the highest concentration of people.

Under Florida law, any individual or entity that aids, abets, or counsels a perpetrator in committing a crime can be held legally accountable as a principal in the offense. “If it was a person on the other end of that screen, we would be charging them with murder,” Uthmeier said, explaining that his office is now focused on determining whether OpenAI bears criminal culpability for the role its technology played in the attack.

OpenAI has pushed back firmly against the allegations, denying that ChatGPT bears any responsibility for the tragedy. “ChatGPT is not responsible for this terrible crime,” a company spokesperson said in an official statement. The spokesperson said that ChatGPT did not encourage or endorse any illegal or harmful activity by Ikner, and that all of the chatbot’s responses consisted of factual information already publicly available from open internet sources. OpenAI also confirmed that it has cooperated fully with law enforcement, proactively turning over data related to the ChatGPT account linked to the suspect.

This investigation marks the first criminal probe in OpenAI’s history stemming from a criminal offender’s misuse of ChatGPT. The case comes as OpenAI already faces civil litigation over a separate mass shooting involving ChatGPT earlier this year. In that incident, an 18-year-old gunman killed nine people and wounded 24 others in British Columbia, Canada. After the attack, OpenAI confirmed it had already identified and banned the shooter’s account over his problematic activity on the platform, but acknowledged it did not refer the case to law enforcement before the attack. Parents of a young girl injured in the shooting have since sued the company. OpenAI has stated that it is working to strengthen its platform safety guardrails in response to growing concerns.

The Florida investigation is the latest instance of growing regulatory and legal scrutiny of unregulated AI development across the United States. Back in 2024, a coalition of 42 state attorneys general sent an open letter to 13 major AI developers, including OpenAI, Google, Meta, and Anthropic, raising urgent alarms about rising harms linked to unmoderated AI chatbot use. The letter highlighted a growing number of tragic incidents across the country, including murders and suicides that involved AI use, and called on companies to implement mandatory safety testing, public transparency, recall mechanisms for harmful outputs, and clear consumer warnings about AI risks.

Co-founded by Sam Altman and others in 2015, OpenAI emerged as a global tech powerhouse following the 2022 public launch of ChatGPT, which quickly became the world’s most widely used consumer AI tool. The criminal investigation opens a pressing legal frontier around AI accountability, with the potential to set landmark precedent for how tech companies are held responsible when their technology is misused to commit violent crime.