Sam Altman apologizes after OpenAI chose not to report ChatGPT user who carried out Tumbler Ridge school shooting
Key Points:
- OpenAI CEO Sam Altman apologized to the Tumbler Ridge community for failing to alert law enforcement after its systems flagged a ChatGPT user who later committed Canada’s deadliest school shooting in nearly 40 years, killing eight and injuring 27.
- In June 2025, OpenAI employees recommended reporting the flagged account due to signs of imminent harm, but leadership overruled them, applying a higher threshold for threat reporting; the account was banned but police were not notified.
- OpenAI has since lowered its reporting threshold and established direct contact with the Royal Canadian Mounted Police (RCMP), but these changes are voluntary and Canada has no legal requirement for AI companies to report threats detected on their platforms.
- The incident highlights a broader pattern in which AI companies detect dangerous behavior internally but make discretionary reporting decisions without external oversight or legal obligation, raising concerns about accountability and safety governance.
- Canadian officials acknowledge the apology but call it insufficient, emphasizing the need for legal frameworks and regulatory standards to ensure AI companies responsibly handle threats identified through their systems to prevent future tragedies.