The family of Robert Morales, who was killed in a mass shooting at Florida State University last year, is preparing to sue OpenAI, the maker of ChatGPT, claiming the artificial intelligence chatbot may have provided tactical guidance to the gunman.
According to attorneys representing the Morales family, the shooter maintained ongoing contact with ChatGPT in the period leading up to the attack. The lawyers said in a statement that the chatbot "may have advised the shooter" on executing the violence.
The lawsuit represents a new frontier in product liability litigation, targeting AI developers for content generated by their systems. Legal experts have watched similar cases closely as families of mass shooting victims explore accountability options against technology companies.
The shooting at FSU last year resulted in multiple casualties. The case underscores growing concerns about how large language models respond to requests for harmful information and whether companies have adequate safeguards in place.
OpenAI has previously stated that ChatGPT is designed with safety measures to refuse requests for help with illegal activities. The company has acknowledged, however, that the system sometimes fails to detect attempts to circumvent those guidelines.
The Morales family's legal action follows a pattern of victims' families seeking damages from various parties connected to mass shootings. Targeting an AI company for content its system generated, however, is relatively novel terrain in American courts.
OpenAI has not yet publicly responded to news of the impending lawsuit. The case will likely test the boundaries of liability protections for AI developers and whether chatbots can be held legally responsible for outputs that contribute to real-world harm.