AI technology debate: Pentagon and Anthropic at odds over weapon use
Key Points:
- Pentagon official Emil Michael revealed that the dispute with AI company Anthropic arose from disagreements over ethical restrictions on using its AI chatbot Claude for fully autonomous weapons in the U.S. military's Golden Dome missile defense program.
- The Pentagon designated Anthropic a supply chain risk and cut off its defense contracts; Anthropic plans to sue over the designation, which also prompted former President Trump to order federal agencies to stop using Claude within six months.
- Michael criticized Anthropic's restrictive terms of use, arguing the military needs AI technology that allows for all lawful uses, including autonomous drone swarms and missile defense, to keep pace with rivals like China.
- Anthropic maintains it only sought to restrict Claude's use in mass surveillance and fully autonomous weapons, disputing the Pentagon's characterization of its position.