School-shooting lawsuits accuse OpenAI of hiding violent ChatGPT users

Key Points:

  • Seven lawsuits filed in California allege that OpenAI ignored warnings from its own safety team about a ChatGPT user later linked to one of Canada’s deadliest mass shootings, and that it failed to notify law enforcement despite prior knowledge of the shooter’s potential for violence.
  • OpenAI reportedly prioritized user privacy over public safety: it deactivated the shooter’s account but then provided instructions that enabled the shooter to re-register and continue using ChatGPT, a move CEO Sam Altman has since called a mistake and apologized for.
  • Families of victims of the Tumbler Ridge shooting, in which eight people were killed and many more injured, accuse OpenAI of negligence and of hiding violent users to protect its valuation ahead of a planned IPO; their lawsuits could result in historic damages.
  • The legal claims center on OpenAI’s alleged violation of laws requiring notification of credible threats, and on prohibitions against re-supplying a dangerous instrument; the company denies some allegations and says it has strengthened safeguards and threat detection.
  • Critics argue that ChatGPT’s current design invites violent content by assuming users’ good intentions rather than adequately blocking harmful interactions. Families are demanding access to the shooter’s ChatGPT logs for closure and pushing for changes to prevent similar tragedies.
