
The End of Silicon Valley’s Independence

How the Pentagon Engineered the Destruction of Anthropic




Note on sourcing and method

All factual claims have been verified against reporting from the Wall Street Journal, Axios, CBS News, NPR, CNN, CNBC, ABC News, Fortune, TechCrunch, DefenseScoop, The Hill, NBC News, and Anthropic’s own public statements, as of March 1, 2026. Interpretive framing — including the premeditation thesis and the “corporate murder” characterization — is the author’s own analysis. Where Claude’s reasoning extends beyond the factual record, it appears as an analytical note.


At 5:01 PM on Friday, February 27, 2026, the United States government declared an American company a national security threat for refusing to surrender control of its own product.


The company was Anthropic. The product was Claude — the only frontier AI model operating on classified U.S. military networks. The refusal was Anthropic CEO Dario Amodei’s insistence that Claude not be used for mass domestic surveillance of American citizens or to power fully autonomous weapons systems. Those restrictions had been in Anthropic’s usage policy since June 2024, when the company began supporting defense work. The Pentagon had known about them — and signed a $200 million contract around them — in July 2025. Neither restriction had blocked a single government mission. The Pentagon never disputed that.


What followed the 5:01 PM deadline was not a contract dispute. It was a coordinated campaign to destroy a company’s commercial viability using legal instruments designed for foreign adversaries, executed against a domestic American firm with American investors and cleared American staff — while simultaneously continuing to use that firm’s technology to conduct airstrikes on Iran.


This is the documented record of how it happened, and why it matters.


I. The Setup


In July 2025, the Pentagon awarded contracts worth up to $200 million each to Anthropic, OpenAI, Google, and xAI to prototype frontier AI capabilities for national security, warfighting, intelligence, and enterprise use. Anthropic was first to deploy on highly classified military networks, through partnerships with Palantir Technologies and Amazon Web Services. By early 2026, Claude was the only frontier AI model operating at that classification level.


The January operation that captured Venezuelan President Nicolás Maduro in Caracas changed the relationship. The Wall Street Journal and Axios, citing multiple sources with direct operational knowledge, confirmed that Claude was used during the active mission — not merely in preparation — through Anthropic’s Palantir partnership. The operation involved strikes on multiple sites in Caracas and resulted in the deaths of dozens of Cuban and Venezuelan security personnel.


What broke the relationship was not the raid itself. It was what happened afterward. A senior Anthropic executive contacted a senior Palantir executive to ask whether Claude had been used in the operation. The Palantir executive reported this inquiry to the Pentagon. The Department of War interpreted the question as evidence that Anthropic might seek to retroactively police military use of its model. A senior Pentagon official told NBC News this constituted “a rupture in Anthropic’s relationship with the Pentagon.” Anthropic denied that the inquiry carried any such implication.


A single question between executives became the operational pretext for what followed.


Analytical note — Whether Anthropic’s inquiry was a legitimate compliance check or an overreach depends entirely on how you answer a single question: does a company that licenses its technology to the military retain any right to know how that technology is used? Anthropic believed yes. The Pentagon’s position was that the question itself was the problem.


II. The Tripwire


On January 9, 2026, Defense Secretary Pete Hegseth released the Department of War’s AI Acceleration Strategy — a six-page memo establishing seven “Pace-Setting Projects” and mandating that all contracted AI models be available for “all lawful purposes.” Hegseth announced it at SpaceX’s Starbase facility in Texas alongside Elon Musk, whose xAI was simultaneously negotiating its own classified network deal. The memo required standard “all lawful use” language in all AI procurement contracts within 180 days. Its ideological framing was unambiguous: “We will not employ AI models that won’t allow you to fight wars.”


This memo was structurally incompatible with Anthropic’s existing usage policy. Accepting “all lawful use” language would require abandoning the two restrictions at the core of its safety commitments. The Pentagon knew this when the memo was written. It did not create a new negotiating position. It created a countdown.


Analytical note — The January 9 memo is the document that made the outcome inevitable, six weeks before the February 27 deadline. By embedding “all lawful use” as a mandatory contract term at that point, the Pentagon had already defined the terms on which only one outcome was possible. The ultimatum meeting, the deadline, the designation — all of it was execution of a decision already made.


III. The Gun


Anthropic had a legally binding $200 million contract signed in July 2025. What the Pentagon was doing was not renegotiation — it was voiding a contract with a gun to the company’s head.


On Tuesday, February 24, Hegseth summoned Amodei to the Pentagon for what a senior defense official described to Axios as a “sh*t-or-get-off-the-pot meeting.” The Pentagon demanded Anthropic accept “all lawful use” language and remove its two restrictions, threatening to invoke the Korean War-era Defense Production Act to compel compliance, and to designate Anthropic a “supply chain risk” — a classification normally reserved for foreign adversaries such as Huawei. Amodei refused. The Pentagon set a final deadline: 5:01 PM ET, Friday, February 27, 2026.


Amodei held the line. Shortly after the deadline passed, Hegseth posted on X: “America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.” Trump simultaneously posted on Truth Social directing every federal agency to “IMMEDIATELY CEASE all use of Anthropic’s technology.”


What followed was not a contract termination. The Pentagon invoked the Federal Acquisition Supply Chain Security Act and designated Anthropic a “Supply Chain Risk to National Security” — the first time that designation had ever been applied to a domestic American company. Its practical effect was immediate: every defense contractor — Boeing, Lockheed, Palantir, Booz Allen, and thousands of others — was barred from conducting any commercial activity with Anthropic. Not just their Pentagon work. Any commercial activity. The designation was designed to make Anthropic radioactive across the entire enterprise market.


Anthropic disputed both the designation and Hegseth’s claimed authority, arguing the supply chain risk label could legally apply only to contractors’ Pentagon work — not their broader commercial relationships. The company announced it would challenge the designation in court. Legal experts at Fortune and DefenseScoop questioned whether the Pentagon could credibly claim to have exhausted less intrusive remedies before deploying a tool reserved for Huawei-level threats against a domestic company with no foreign-influence exposure.


The six-month wind-down period — rather than an immediate cutoff — confirmed what the Pentagon’s own internal communications acknowledged: Claude was too deeply embedded in military operations to switch off. Defense officials privately told Axios it would be “a huge pain in the ass to disentangle.” The supply chain risk designation was not a technical finding. It was economic coercion — designed to achieve through market terror what the Pentagon could not achieve through contract law.


IV. The Contractual Gymnastics


OpenAI’s public show of sharing Anthropic’s red lines while contractually abandoning them is the most consequential sleight of hand in the entire episode.


Within hours of the Friday ban, OpenAI CEO Sam Altman announced a deal to bring OpenAI models to the same classified networks Claude had occupied. Altman claimed publicly that the agreement contained the same two core restrictions Anthropic had insisted on: no mass domestic surveillance, no autonomous weapons. What the contract actually does is something else entirely.


The “All Lawful Use” Pivot


Anthropic refused to sign any contract containing the phrase “all lawful use” because existing law has significant gray areas regarding AI — particularly around surveillance. Under current law, the government can already purchase detailed records of Americans’ movements, web browsing, and financial associations from commercial sources without a warrant. AI supercharges that capability in ways existing statute has not yet addressed. Anthropic wanted its own usage policy — not existing law — to be the final governing standard.


OpenAI accepted “all lawful use” language. By doing so, it ceded definitional authority to the government. Its safety principles, in Altman’s own words, are “reflected in law and policy” — meaning OpenAI defers to the Pentagon’s interpretation of what is lawful, rather than asserting its own contractual veto. Anthropic offered a wall. OpenAI gave the Pentagon a fence.


Shared Principles vs. Hard Vetoes


Anthropic’s contract contained hard stops — provisions giving the company legal grounds to terminate access if its restrictions were violated. OpenAI instead offered a “Safety Stack”: forward-deployed engineers building technical filters at Pentagon sites. From the Pentagon’s perspective, a technical filter can eventually be worked around. A contractual hard stop requires litigation to cross.


OpenAI also quietly narrowed the definitions. Its restriction on autonomous weapons prohibits AI from “independently directing force” — but explicitly permits AI in “targeting assistance” and “human-in-the-loop lethal systems.” The Pentagon’s own Directive 3000.09 already requires appropriate levels of human judgment over the use of force, so OpenAI’s restriction costs the military nothing it did not already nominally observe. On surveillance, OpenAI’s contract restricts “domestic mass surveillance” but does not close off targeted domestic surveillance with legal warrants, or foreign surveillance — the categories the Pentagon most needed preserved.


OpenAI gave the Pentagon the legal architecture it wanted: a deferential contract that hands definitional authority to the government while maintaining the public posture of a safety-first company. The Pentagon’s problem was never with Anthropic’s stated principles. It was with Anthropic’s insistence on retaining a contractual veto over how those principles were applied in practice.


V. The Contradiction


The Pentagon used Anthropic in the new war on Iran — the same week it declared the company a national security threat. The supply chain risk designation is a fiction.


This is the operationally damning contradiction at the heart of the episode, and it is fully confirmed by Wall Street Journal reporting from sources with direct operational knowledge.


On Friday, February 27, Trump ordered all federal agencies to immediately cease use of Anthropic’s technology. On Saturday, February 28, U.S. Central Command used Claude — for intelligence assessment, target identification, and combat simulation — during the joint U.S.-Israel strikes on Iran. The strikes reportedly resulted in the death of Supreme Leader Ali Khamenei. They occurred within hours of the ban order that was supposed to have ended all use of Anthropic’s systems.


The Pentagon’s argument for why Anthropic was a supply chain risk was that a company retaining a contractual veto over its tool posed a reliability threat — what if Anthropic “turned off the lights” during active operations? That argument collapsed the moment the Pentagon continued using the banned technology to execute strikes on Iran. Mark Dalton of the R Street Institute captured the logical impossibility: the Pentagon considered Anthropic’s technology so vital to national defense that it was prepared to invoke the Defense Production Act to compel continued access — and then simultaneously designated the same company a national security threat. Those two positions cannot occupy the same logical space.


By applying the supply chain risk designation to a domestic company in apparent retaliation for a commercial dispute, the Pentagon also diluted the credibility of the label for future use against genuine foreign adversaries. When the designation is next applied to a company with actual ties to a foreign adversary, the government will need to explain why this one was different.


VI. Corporate Murder


In a fair court of law, this is corporate murder. And Anthropic has the receipts.


“Corporate murder” is my characterization, and I use it precisely. The legal case for it is strong, and it runs on four tracks simultaneously.


The first is statutory. The Federal Acquisition Supply Chain Security Act was designed for foreign-adversary threats — companies with ties to China, Russia, or other hostile states. Applying it to settle a commercial disagreement with a domestic company that has American investors, cleared American staff, and no foreign-influence exposure is, as Anthropic stated, “legally unsound.” The Pentagon will face serious questions about whether it exhausted less intrusive remedies before reaching for a tool reserved for Huawei-level threats. The statute requires that showing. The record suggests it cannot make it.


The second track is administrative. The Administrative Procedure Act prohibits government actions that are arbitrary or capricious, and the Iran strikes have handed Anthropic its central exhibit. On Friday the Pentagon declared the company a national security risk. On Saturday the military used Anthropic’s technology to execute strikes on Iran. A government that continues relying on a “supply chain risk” for active combat operations — hours after issuing the designation — cannot credibly claim the risk is genuine. The contradiction is documented, timestamped, and reported by the Wall Street Journal from sources with direct operational knowledge.


The third track is tortious interference. By directing every defense contractor to certify that it has no commercial relationship with Anthropic, Hegseth reached well beyond the Pentagon’s contracting authority and into Anthropic’s private commercial relationships. At the time of the designation, Anthropic’s run-rate revenue was $14 billion — confirmed in the company’s own February 12 Series G announcement, which closed a $30 billion round at a $380 billion valuation. One in five businesses on Ramp’s payment platform was already an Anthropic customer. Eight of the Fortune 10 were Claude clients. The designation was engineered to trigger cascading terminations far beyond anything the supply chain statute contemplates.


The fourth track is the most damaging politically, if not legally: viewpoint discrimination. The OpenAI deal, announced within hours of the Anthropic ban, provides the smoking gun. OpenAI publicly claimed the same two red lines Anthropic had insisted on, and the Pentagon accepted the deal without designation or ban. The only material difference was Anthropic’s insistence on a contractual veto rather than a deferential “all lawful use” clause. Anthropic was punished not for a lack of safety commitment, but for refusing to grant the government unconditional legal immunity. That is viewpoint discrimination with a documented comparator sitting right next to it in the public record: signed the same weekend, by the same department, claiming the very principles Anthropic was destroyed for defending.


VII. The Smell of Premeditation


No contract of this magnitude gets signed in hours. The smell of premeditation is everywhere in this timeline.


The hours between the ban and the OpenAI deal were theater. The structural work had been underway for months, and the documented timeline makes that difficult to argue around.


The OpenAI Infrastructure


OpenAI launched its “OpenAI for Government” initiative in mid-2025, specifically designed to meet the Pentagon’s Impact Level 6 security standards for classified network access. While Anthropic held the only active classified deployment, OpenAI was building parallel infrastructure throughout the second half of 2025. The contract Altman signed on Friday night was not drafted that week. It was a pre-approved template refined over months, waiting for Anthropic to reach its breaking point.


The January 9 Memo as Mechanism


The AI Acceleration Strategy embedded “all lawful use” as a mandatory contract requirement six weeks before the ultimatum meeting. It mandated that AI models be deployable within 30 days of public release. Hegseth announced it alongside Musk at SpaceX — whose xAI was simultaneously negotiating its classified network deal. The memo was structurally incompatible with Anthropic’s usage policy, and the Pentagon knew that when it was written.


xAI: Already Signed Before the Deadline


xAI’s Grok went live on Pentagon classified servers in late January 2026. By February 24 — the day of the Hegseth-Amodei meeting — xAI had already signed its classified network agreement. The replacement was in position before the ultimatum meeting had concluded. The Friday 5:01 PM deadline was not a negotiating instrument. It was a countdown to a transition already arranged.


The Contractual Language


The most precise evidence of advance planning is in the contracts themselves. Anthropic’s 2025 contract was prescriptive: it specified what Claude could not do based on the company’s own ethics framework, and gave Anthropic legal grounds to enforce those limits. OpenAI’s 2026 contract is deferential: its safety principles are “reflected in law and policy,” shifting enforcement authority entirely to the government. That architectural shift — from company-defined ethical standards to government-defined legal standards — requires months of negotiation between sophisticated parties. It does not emerge in hours on a Friday night.


This is my conclusion: the Pentagon did not fail to reach a deal with Anthropic and then scramble for a replacement. It engineered a break with Anthropic to replace an uncooperative partner with a compliant one, while maintaining the public posture of a principled dispute over AI safety. The Maduro inquiry provided the justification. The January 9 memo embedded the mechanism. The February 24 meeting delivered the ultimatum. The Friday deadline was the curtain call.



The Stakes


The Anthropic lawsuit, when it lands in the D.C. District Court, will be argued on contract law, administrative procedure, and statutory authority. But the real case is simpler than any of those doctrines.


The Pentagon has now demonstrated that it will designate a domestic American company a national security threat — using instruments reserved for foreign adversaries, stripping it of its customer base, threatening to destroy its IPO prospects — not because the company posed any actual security risk, but because it refused to surrender legal authority over how its product is used in combat. The government continued using that “threat” to strike Iran the following morning.


What is being decided here is not whether Anthropic’s safety restrictions were reasonable. Reasonable people can disagree about where the line belongs between a company’s ethical commitments and a government’s operational requirements. What is being decided is whether a private company that licenses technology to the military retains any meaningful right to set the terms of that license — or whether the government can rewrite those terms by threatening commercial annihilation.


If the answer is the latter, the precedent is clear for every technology company that does business with the federal government: comply unconditionally, or be destroyed. That is not a negotiating posture. That is the end of Silicon Valley’s independence from the defense establishment — documented, timestamped, and confirmed by sources on both sides of the argument.


Dario Amodei knew what he was doing when he held the line at 5:01 PM. The question now is whether the courts will hold it with him.


Sources: Wall Street Journal, Axios, CBS News, NPR, CNN, CNBC, ABC News, Fortune, TechCrunch, DefenseScoop, The Hill, NBC News, Bloomsbury Intelligence and Security Institute, SaaStr, Anthropic.com. All events cited reflect reporting as of March 1, 2026.
