Pentagon says it is labeling AI company Anthropic a supply chain risk 'effective immediately'
Summary
The Pentagon has designated AI company Anthropic a supply chain risk, effectively banning its AI chatbot Claude from military use and prompting legal challenges and criticism.
Key Points
- The Pentagon has designated AI company Anthropic and its chatbot Claude as a supply chain risk effective immediately.
- The decision restricts military use of Claude, leading defense contractors such as Lockheed Martin to cut ties with Anthropic.
- Anthropic CEO Dario Amodei announced plans to challenge the designation in court, arguing it is unjustified.
- The move has drawn criticism from politicians, former national security officials, and industry experts who see it as a dangerous precedent.