Published on April 19, 2026
For months, Anthropic has been at odds with the Pentagon over restrictions the company places on how its AI technologies may be used in government operations. The turbulent landscape shifted recently when it was revealed that the National Security Agency is employing Anthropic’s new AI model, Mythos Preview, for critical security tasks. This development comes despite a directive from former President Trump aimed at limiting government agencies’ access to Anthropic’s services.
The controversy began when Anthropic refused to compromise on safety measures during contract discussions, leading to Trump’s directive in February. However, just days before the NSA’s utilization of Mythos was confirmed, Anthropic CEO Dario Amodei met with White House officials to discuss the implications and capabilities of the new model. The White House termed the meeting “productive,” although Trump expressed unawareness of it.
Anthropic’s legal battles with the US government continue to escalate. Following the Pentagon’s designation of the company as a “supply chain risk,” Anthropic filed lawsuits in multiple courts. While one court temporarily blocked this classification, another declined Anthropic’s request for an injunction, complicating its standing in the face of federal scrutiny.
The NSA’s engagement with Mythos signals a significant shift in the internal dynamics of AI governance within federal agencies. With access to Anthropic’s technology, the NSA may leverage enhanced capabilities for cybersecurity, even as the company navigates a complicated legal landscape. The outcome of these tensions could redefine the relationship between emerging AI technologies and national security protocols.