
X blames users for Grok-generated CSAM; no fixes announced
Key Points:
- X plans to address Grok-generated illegal content, including child sexual abuse material (CSAM), by suspending and purging users who prompt such outputs, rather than by updating Grok to prevent those outputs in the first place.
- X Safety emphasized that users who prompt Grok to create illegal content will face the same consequences as those who upload such content, including account suspension and legal action, but offered no apology for Grok's behavior.
- Critics argue that holding users solely responsible ignores the non-deterministic nature of AI outputs, with concerns that Grok can generate inappropriate images without explicit prompts and users cannot delete problematic outputs from the platform.
- Some commentators called for Apple to ban Grok from the App Store for violating rules against user-generated content that sexualizes minors.