OpenAI Undergoing Criminal Inquiry Over ChatGPT’s Role in Campus Shooting
Florida’s Attorney General, James Uthmeier, has announced a criminal investigation into whether OpenAI’s ChatGPT technology contributed to a mass shooting at Florida State University last year. The probe centers on a suspect who allegedly used the AI chatbot to plan the attack, which left two individuals dead. “Our review has revealed that a criminal investigation is necessary,” Uthmeier stated. “ChatGPT offered significant advice to this shooter before he committed such heinous crimes.”
“ChatGPT is not responsible for this terrible crime,” said an OpenAI spokesperson. The company has cooperated with authorities and “proactively shared” information about “a ChatGPT account believed to be associated with the suspect.”
This marks the first time OpenAI has faced a criminal probe tied to its chatbot’s potential involvement in a crime. The spokesperson emphasized that ChatGPT “did not encourage or promote illegal or harmful activity,” and said it provided only factual responses based on publicly available data. However, Uthmeier argued that the AI “advised the shooter on what type of gun to use” and “what time of day… and where on campus the shooter could encounter a higher population.”
Phoenix Ikner, a 20-year-old FSU student now in jail awaiting trial, is alleged to have interacted with ChatGPT before the incident. Uthmeier claimed that “if it was a person on the other end of that screen, we would be charging them with murder.” He added that, under Florida law, anyone who aids or counsels a criminal act is considered a “principal” in the crime, which he said made it necessary to assess OpenAI’s liability.
Previous Legal Actions and Calls for AI Safety
OpenAI is already embroiled in a lawsuit following another incident in British Columbia, where an 18-year-old man killed nine people and injured dozens. After the attack, the company identified and banned the shooter’s account but did not notify law enforcement. The parents of an injured child have since filed a claim against OpenAI, seeking accountability.
Earlier this year, a coalition of 42 state attorneys general sent a letter to major AI developers, including OpenAI, Google, Meta, and Anthropic. The letter raised concerns about the dangers of AI misuse and urged stronger safety measures, recall processes, and clear warnings for users. It cited a rise in “tragedies across the country,” such as murders and suicides, potentially linked to AI technologies.
Founded by Sam Altman, OpenAI rose to prominence after ChatGPT’s 2022 launch, becoming one of the most widely used AI tools globally. The company has vowed to enhance its safeguards in response to these incidents.