Anthropic, a leading artificial intelligence company, has submitted two sworn declarations to a California federal court, vigorously contesting the Pentagon’s assertion that the firm poses an "unacceptable risk to national security." Filed late Friday afternoon, the declarations argue that the government’s case rests on fundamental technical misunderstandings and on allegations that were never raised during the months of negotiations that preceded the dispute. The filings accompany Anthropic’s reply brief in its lawsuit against the Department of Defense (DoD) and come ahead of a pivotal hearing scheduled for this coming Tuesday, March 24, before Judge Rita Lin in San Francisco.
The Genesis of a High-Stakes Dispute
The legal confrontation between Anthropic and the Pentagon is rooted in a series of events from late February, when President Trump and Defense Secretary Pete Hegseth publicly announced a decision to sever ties with the AI company. The announcement came after Anthropic reportedly refused to grant the military unrestricted use of its advanced AI technology, setting the stage for a broader debate over the ethical boundaries and control of artificial intelligence in defense applications. The dispute quickly escalated from a negotiation impasse into a full-blown legal challenge, with Anthropic arguing that the DoD’s subsequent "supply-chain risk designation" – a rare and severe measure – was retaliatory and violated the company’s First Amendment rights. The Pentagon, conversely, maintains that its decision was a straightforward national security call, necessitated by what it views as Anthropic’s restrictive policies on military use.
Anthropic’s Defense: Unpacking the Sworn Declarations
To bolster its legal position, Anthropic presented declarations from two key executives: Sarah Heck, the company’s Head of Policy, and Thiyagu Ramasamy, its Head of Public Sector. Their testimonies aim to dismantle the core arguments put forth by the government, offering insider perspectives on the negotiations and the technical realities of Anthropic’s AI deployment.
Sarah Heck’s Account: Dispelling Misconceptions and Revealing Timeline Discrepancies
Sarah Heck, a seasoned national security professional who served on the National Security Council under the Obama administration before moving to the tech sector, plays a crucial role in Anthropic’s government relations and policy work. Her declaration directly addresses what she identifies as a central misrepresentation in the government’s filings: the claim that Anthropic demanded an "approval role" over military operations. Heck unequivocally refutes this, stating, "At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role." This assertion challenges the very premise of the Pentagon’s argument that Anthropic sought operational oversight, painting it as a fabricated concern.
Furthermore, Heck’s declaration highlights another significant procedural lapse: the Pentagon’s concern that Anthropic could disable or alter its technology mid-operation was, according to her, never raised during the extensive negotiation period. This critical security concern, she alleges, surfaced for the first time in the government’s court filings, effectively denying Anthropic any prior opportunity to address or clarify it. This suggests a pattern in which the government’s stated reasons for the designation were either newly formulated or deliberately withheld during negotiations, only to be weaponized in court.
Perhaps one of the most compelling pieces of evidence in Heck’s declaration is an email from March 4 – merely one day after the Pentagon formally finalized its supply-chain risk designation against Anthropic. In this email, Under Secretary Emil Michael, a key Pentagon official involved in the dispute, communicated to Anthropic CEO Dario Amodei that the two sides were "very close" on the very issues the government now cites as evidence of Anthropic being a national security threat: its positions on autonomous weapons and mass surveillance of Americans. This email, attached as an exhibit, creates a stark contrast with Michael’s subsequent public statements. On March 5, Amodei himself published a statement mentioning "productive conversations" with the Pentagon. Yet, the day after, Michael posted on X (formerly Twitter) declaring, "there is no active Department of War negotiation with Anthropic," and a week later, told CNBC there was "no chance" of renewed talks.
Heck’s declaration implicitly questions the government’s sincerity and consistency. If Anthropic’s stance on autonomous weapons and mass surveillance was indeed the linchpin of its national security risk, why was a senior Pentagon official indicating near-alignment on those very issues just one day after the designation was finalized? The timeline suggests that the designation might have been used as a bargaining chip or a punitive measure rather than a genuine reflection of insurmountable security concerns – a point that, while not explicitly stated by Heck, is strongly implied by the chronology she meticulously lays out.
Thiyagu Ramasamy’s Expertise: Debunking Technical "Kill Switch" Fears
Thiyagu Ramasamy brings a critical technical perspective to Anthropic’s defense. Before joining Anthropic in 2025, he accumulated six years of experience at Amazon Web Services, managing AI deployments for government clients, including those in classified environments. At Anthropic, he is credited with building the team responsible for integrating its Claude models into national security and defense settings, including the highly publicized $200 million contract with the Pentagon announced last summer. His expertise is central to challenging the technical feasibility of the Pentagon’s concerns.
Ramasamy’s declaration directly confronts the government’s claim that Anthropic could theoretically interfere with military operations by remotely disabling its technology or altering its behavior. He asserts that such interference is technically impossible. According to his explanation, once Claude is deployed within a government-secured, "air-gapped" system – a highly isolated network operated by a third-party contractor – Anthropic loses all access. He clarifies that there is no remote kill switch, no hidden backdoor, and no mechanism for Anthropic to push unauthorized updates. Any notion of an "operational veto" from Anthropic is, in his words, a "fiction." He further explains that any modification to the AI model would necessitate the Pentagon’s explicit approval and active installation, effectively placing control firmly in the hands of the military.
To further assuage fears of data compromise, Ramasamy states that Anthropic cannot even monitor what government users input into the system, let alone extract any sensitive data. This technical safeguard directly counters concerns about espionage or unauthorized access to government data.
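The control model Ramasamy describes can be pictured with a short, purely illustrative sketch: the deployed model sits in an isolated environment the vendor cannot reach, and an update takes effect only if the operating agency itself approves and installs it. The Python below is a hypothetical toy, not code from any actual Anthropic or Pentagon system; names such as `AirGappedDeployment` and `UpdatePackage` are invented for this example.

```python
# Illustrative sketch only: a toy model of the control flow described in the
# declaration, in which a vendor-supplied update cannot take effect unless the
# operating agency explicitly approves and installs it. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class UpdatePackage:
    """A candidate model update delivered offline (e.g., on vetted media)."""
    version: str
    checksum: str  # verified against a manifest supplied out of band


@dataclass
class AirGappedDeployment:
    """A deployed model inside an isolated enclave.

    The vendor holds no credentials for this object: there is no remote
    endpoint through which it could disable the model, alter its behavior,
    or read the prompts operators submit.
    """
    installed_version: str = "1.0"
    approved_checksums: set = field(default_factory=set)  # cleared by the agency

    def approve(self, pkg: UpdatePackage, reviewer: str) -> None:
        # Only the operating agency's own reviewers can mark a package approved.
        print(f"{reviewer} approved update {pkg.version}")
        self.approved_checksums.add(pkg.checksum)

    def install(self, pkg: UpdatePackage) -> None:
        # Fail closed: an unapproved package is rejected outright, so there is
        # no path for a vendor-initiated or silent update.
        if pkg.checksum not in self.approved_checksums:
            raise PermissionError("update not approved by the operating agency")
        self.installed_version = pkg.version
        print(f"installed version {pkg.version}")


if __name__ == "__main__":
    enclave = AirGappedDeployment()
    update = UpdatePackage(version="1.1", checksum="abc123")
    try:
        enclave.install(update)  # rejected: no approval has been recorded
    except PermissionError as err:
        print("blocked:", err)
    enclave.approve(update, reviewer="agency security officer")
    enclave.install(update)  # succeeds only after explicit agency approval
```

The point of the sketch is the fail-closed design: absent an explicit approval recorded by the operating agency, no installation path exists, which is the substance of Ramasamy’s claim that there is no vendor-side kill switch or silent update in this kind of arrangement.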
Addressing another point of contention, Ramasamy disputes the government’s claim that Anthropic’s employment of foreign nationals inherently constitutes a security risk. He emphasizes that all Anthropic employees involved in government projects undergo rigorous U.S. government security clearance vetting – the identical background check process required for access to classified information. He adds that, "to my knowledge," Anthropic stands as the sole AI company where cleared personnel are directly responsible for building the AI models specifically designed to operate within classified environments, underscoring a commitment to security protocols that, he argues, surpasses industry standards.
The Pentagon’s Counter-Arguments and the First Amendment Battle
In its earlier 40-page filing, the government unequivocally rejected Anthropic’s framing of the dispute. The DoD argued that Anthropic’s refusal to permit all lawful military uses of its technology was purely a "business decision," and therefore not protected speech under the First Amendment. The Pentagon maintained that the supply-chain risk designation was a legitimate "national security call," not a punitive measure for the company’s views on AI safety. This stance frames the conflict as a matter of operational necessity and sovereign security, rather than a suppression of corporate free speech or an overreach of government power. The DoD’s position emphasizes that it must have unfettered access and control over technologies critical to its missions, especially when those technologies are deployed in sensitive military contexts.
Chronology of a Contentious Relationship
The timeline of the Anthropic-Pentagon saga illustrates a rapid deterioration from collaboration to confrontation:
- Summer 2025: Anthropic secures a significant $200 million contract with the Pentagon to advance responsible AI in defense operations, signaling a promising partnership.
- Months Preceding February 2026: Anthropic and the Department of Defense engage in negotiations regarding the terms of AI technology use, with Anthropic reportedly advocating for restrictions on autonomous weapons and mass surveillance.
- Late February 2026: President Trump and Defense Secretary Pete Hegseth publicly announce the termination of ties with Anthropic, citing the company’s refusal to allow unrestricted military use of its AI.
- March 3, 2026: The Pentagon formally finalizes its "supply-chain risk designation" against Anthropic, a severe measure effectively barring the company from future defense contracts. This marks the first time such a designation has been applied to an American company.
- March 4, 2026: Under Secretary Emil Michael emails Anthropic CEO Dario Amodei, indicating that the two sides were "very close" on key contentious issues, directly contradicting the immediate rationale for the designation.
- March 5, 2026: Anthropic CEO Dario Amodei issues a public statement acknowledging "productive conversations" with the Pentagon, seemingly hopeful for a resolution.
- March 6, 2026: Under Secretary Michael posts on X, stating, "there is no active Department of War negotiation with Anthropic," signaling a hardening of the Pentagon’s stance.
- A Week Later (approx. March 13, 2026): Michael tells CNBC there is "no chance" of renewed talks, further solidifying the breakdown in communication and trust.
- Earlier This Week (approx. March 17-18, 2026): The government files a 40-page brief rejecting Anthropic’s First Amendment arguments and reiterating its national security concerns.
- Late Friday Afternoon (March 20, 2026): Anthropic files its reply brief, accompanied by the sworn declarations from Sarah Heck and Thiyagu Ramasamy, directly challenging the government’s claims.
- Tuesday, March 24, 2026: A crucial hearing is scheduled before Judge Rita Lin in San Francisco, where both sides will present their arguments.
Broader Context: AI, National Security, and Ethical Imperatives
This legal battle unfolds against a backdrop of increasing urgency surrounding the integration of artificial intelligence into national security frameworks. The ethical implications of AI, particularly in military applications, have become a global concern. Companies like Anthropic have publicly articulated principles for responsible AI development, often including safeguards against autonomous weapons that operate without meaningful human control, or AI used for indiscriminate mass surveillance.
The Pentagon, meanwhile, is under immense pressure to maintain a technological edge, viewing advanced AI as critical for future defense capabilities. The tension arises when a company’s ethical guidelines clash with a government’s perceived operational necessities. This case highlights a fundamental struggle: how to balance the innovation and power of private sector AI with the unique demands and moral responsibilities of national defense. The "supply-chain risk designation" applied to Anthropic, a measure typically reserved for foreign entities or companies with demonstrable security vulnerabilities, underscores the extreme nature of the Pentagon’s response and its determination to assert control over technologies deemed vital.
Legal Precedent and Implications for AI-Defense Collaboration
The outcome of Anthropic v. DoD could set a significant legal precedent for the nascent and rapidly evolving relationship between advanced AI developers and national defense agencies. If Anthropic’s First Amendment argument prevails, it could establish a critical protection for tech companies to articulate and adhere to ethical guidelines, even when those guidelines conflict with government demands. Conversely, a victory for the Pentagon could empower the government to exert greater control over the terms of technology use, potentially chilling future collaborations with companies that prioritize ethical AI development.
The designation itself, being the first of its kind against an American company, is particularly noteworthy. It signals a new era where domestic tech firms, previously seen as partners in innovation, could be categorized as national security risks if their policies diverge too sharply from governmental requirements. This case could redefine the boundaries of corporate autonomy versus national security imperatives in the digital age.
Beyond the immediate legal ramifications, the dispute has profound implications for the future of AI-defense collaboration. Other AI companies, many of which share similar ethical concerns about military applications, will be watching the proceedings closely. The resolution of this case could influence how these companies structure their agreements with defense clients, potentially leading to more explicit contractual language on ethical use – or, conversely, making them more hesitant to engage with the defense sector at all. The very definition of "responsible AI" in a military context is being litigated, with far-reaching consequences for technology development and global security.
The Road Ahead: Awaiting Judge Lin’s Decision
As the hearing approaches on Tuesday, all eyes will be on Judge Rita Lin’s courtroom in San Francisco. The judge will consider the arguments presented in Anthropic’s reply brief and the sworn declarations, alongside the government’s earlier filings. The immediate focus will be on whether Anthropic’s claims of technical misunderstanding and procedural missteps can sway the court, and whether the First Amendment argument holds weight against the government’s national security claims. The court’s decision will not only determine the fate of Anthropic’s lawsuit but also send a powerful message about the future of AI governance, national security, and the delicate balance between technological innovation and ethical responsibility. The legal battle between Anthropic and the Pentagon is more than just a corporate dispute; it is a critical test case that will shape the landscape of AI in defense for years to come.