Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI
Key Points:
- Anthropic accuses DeepSeek, MiniMax, and Moonshot of misusing its Claude AI model through "industrial-scale campaigns" involving approximately 24,000 fraudulent accounts and over 16 million interactions to improve their own AI products.
- The companies are alleged to have engaged in "distillation," a process of training smaller AI models based on more advanced ones, which Anthropic acknowledges as legitimate but warns can be exploited for illicit purposes, including bypassing safeguards.
- Anthropic warns that illicitly distilled models may lack essential protections, potentially enabling authoritarian regimes to use advanced AI for military, intelligence, surveillance, cyber operations, and disinformation campaigns.
- DeepSeek reportedly conducted over 150,000 exchanges with Claude, targeting its reasoning abilities.