Judge blocks Pentagon’s effort to ‘punish’ Anthropic by labeling it a supply chain risk
Key Points:
- A federal judge in California has indefinitely blocked the Pentagon's attempt to label AI company Anthropic as a supply chain risk and sever government ties, ruling that this action violated Anthropic's First Amendment and due process rights.
- Judge Rita Lin characterized the Pentagon's designation as retaliation for Anthropic's refusal to relax contractual guardrails on its Claude AI model, particularly those covering autonomous weapons and mass surveillance.
- The supply chain risk label would have required military contractors to prove they did not use Anthropic products, a designation previously reserved for companies linked to foreign adversaries.
- Anthropic praised the ruling and emphasized its commitment to working with the government on safe and reliable AI, even as it fought in court to protect its rights and contracts.
- The Department of Defense argued it needed unrestricted access to Claude for wartime uses, but Anthropic maintained two firm restrictions, on autonomous weapons and mass surveillance, leading to the dispute and ongoing legal challenges.