THE SOUL MACHINE

Anthropic, the Pentagon, Peter Thiel, and the Architecture of the Sovereign Nexus




There is a question that the mainstream technology press has been reluctant to ask directly, perhaps because asking it forces an uncomfortable confrontation with the degree to which the architecture of artificial intelligence has become inseparable from the architecture of American imperial power. The question is this: Was Anthropic's celebrated commitment to AI safety ever a genuine constraint on its behavior, or was it always, at least in part, a brand strategy — a cloak of institutional virtue that enabled the company to secure exactly the kind of deep government access that its safety rhetoric was ostensibly designed to prevent?


The events of this week have answered that question — not cleanly, and not in the way the original framing suggested. What has emerged is something more complicated and, in its own way, more disturbing: a story in which the safety commitments were partly real, the capitulation was partly genuine, the red lines held until they cost everything, and none of it ultimately mattered — because the Sovereign Nexus doesn't need any particular company's conscience. It just finds another vendor.


— — —


I. The Safety Company


Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and several colleagues who left OpenAI in part over concerns about that company's pace of development and commitment to safety. The founding narrative was morally serious: these were people who understood the genuine risks of frontier AI development and were unwilling to subordinate those concerns to commercial pressure. The company's Responsible Scaling Policy, first published in 2023, gave that narrative institutional form. It stipulated that Anthropic would not train or deploy models whose capabilities outstripped its ability to control them and guarantee their safety in advance. It was, in the language of the industry, a hard commitment — not an aspiration, not a goal, but a line.

Anthropic leaned heavily into this positioning. It described itself as a company with a 'soul.' It advocated publicly for AI regulation. It invested in safety research and published extensively on alignment. It attracted researchers who genuinely believed that the organization's stated commitments distinguished it from competitors who were, in their view, moving too fast with insufficient regard for consequence.


The brand worked. And one of the things it produced, perhaps unexpectedly, was government trust. Which turned out to be the most consequential thing it produced.


— — —


II. Inside the Classified Fortress


In late 2024, Palantir Technologies integrated Claude into Pentagon systems at Impact Level 6 — the security tier for classified data up to 'secret' level. This made Anthropic the first commercial AI company to operate inside those networks. Anthropic's head of sales, Kate Earle Jensen, said at the time that the company was 'proud to be at the forefront of bringing responsible AI solutions to US classified environments.'


The word 'responsible' is worth pausing on. What does responsibility mean inside a classified environment? There is, by definition, no public accountability. There is no external oversight. There is no transparency. The safety framework that Anthropic had spent years developing as a guarantee to the public — the Responsible Scaling Policy — cannot be enforced or even observed in a classified context. The public values that the safety messaging invokes are precisely what a classified deployment forecloses.


Then, in January 2026, came a report that should have been front-page news for weeks. US special operations forces conducted Operation Absolute Resolve in Caracas, Venezuela, capturing then-head of state Nicolás Maduro and his wife Cilia Flores at approximately 2 a.m. local time on January 3. More than 150 aircraft launched from 20 airbases. Over 100 people were killed. Maduro was transported blindfolded to New York City to face narcoterrorism charges. Reports subsequently confirmed that military personnel used Claude during that mission through the Palantir integration. Neither Anthropic nor the Pentagon provided details about the model's exact role.


An AI model marketed on the basis of its safety commitments and ethical guardrails had been used in the most consequential US military operation since the killing of Osama bin Laden. The public was not informed. Anthropic did not comment substantively. Embedding Claude into classified military systems had collapsed the traditional gap between 'thinking' and 'striking.' The AI that reasons was now the AI that targets.


— — —


III. The Week Everything Changed


On February 24, 2026, two things happened simultaneously that, taken together, constitute one of the most significant moments in the short history of artificial intelligence.


First, Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei and delivered an ultimatum: remove Claude's guardrails to permit 'all lawful use' by the military, or lose the Pentagon's $200 million contract. Hegseth threatened to declare Anthropic a supply-chain risk and to invoke the Defense Production Act, which would compel Anthropic to cooperate regardless of its consent. The deadline was set for 5:01 p.m. on Friday, February 27.


Second, on the same day, Anthropic published version 3.0 of its Responsible Scaling Policy. The core commitment — that Anthropic would not train or release models unless adequate safety mitigations could be guaranteed in advance — was gone. Replaced by 'public goals.' A 'Frontier Safety Roadmap' described as flexible and subject to revision. Hard commitments became aspirational language overnight.


This was a genuine capitulation — but a specific and bounded one. The RSP revision concerned training policy, the conditions under which new models get built and released. It was a surrender to competitive reality: if Anthropic alone paused development while OpenAI, Google, and xAI pressed forward without equivalent constraints, the safety-conscious actor simply loses market position while the less scrupulous actors shape the technology. The logic, however uncomfortable, has a certain structural integrity.


What it was not — as became clear in the days that followed — was a surrender on operational use. And that distinction would prove to be everything.


"The new policy still includes some guardrails, but the core promise — that Anthropic would not release models unless it could guarantee adequate safety mitigations in advance — is gone." — Nik Kairinos, CEO of RAIDS AI (TechRadar, February 25, 2026)


Mrinank Sharma, a senior safety researcher at Anthropic, had already resigned. In his departure letter he wrote: 'I continuously find myself reckoning with our situation.' The reckoning had come due for the institution as well. But the institution was not finished.


— — —


IV. The Palantir Problem


Understanding what Anthropic has become requires understanding Palantir, the data analytics and defense contracting firm co-founded by Peter Thiel. Palantir is not incidentally connected to the US national security apparatus — it is, in fundamental ways, a product of that apparatus and an instrument of it. The company built its business on contracts with the CIA, NSA, US Army, and numerous intelligence agencies, developing surveillance and targeting systems deployed in active conflict zones including Afghanistan and Iraq.


Palantir's integration of Claude into classified Pentagon systems did not create a partnership between a safety-focused AI company and a defense contractor. It created a situation in which the AI model most publicly associated with ethical guardrails became the operational intelligence layer of one of the most powerful surveillance and targeting infrastructure companies in the world. The safety brand did not prevent this. It facilitated it — because Anthropic's reputation for responsibility made Claude the politically acceptable choice for classified government deployment in a way that a less scrupulous competitor might not have been.


This is the intelligence backend of what analysts are calling the Sovereign Nexus: an arrangement in which private AI companies provide the cognitive infrastructure of state power, while remaining formally outside the accountability structures that govern state action. The data-driven 'intelligence' necessary to operate what some observers are calling an 'imperial presidency' is now provided by private firms answerable primarily to their investors and, under sufficient pressure, to the defense establishment that funds them.


— — —


V. Peter Thiel, the Katechon, and the Architecture of Power


Peter Thiel is not merely an investor and Palantir co-founder. He is the closest thing the current American political moment has to an ideological architect — a figure who has used his capital systematically to reshape the personnel and priorities of the US government according to a coherent, if rarely fully articulated, worldview. To understand what is happening, you have to understand that worldview in its own terms, because it is not the worldview of a conventional political donor or even a conventional libertarian. It is something considerably stranger and more consequential.


The documented record of his political intervention is striking in its deliberateness. Thiel contributed $15 million to JD Vance's 2022 Senate campaign in Ohio, manufacturing a political career that would not otherwise have existed. He subsequently advocated within Trump's circle for Vance as the VP selection — and Vance became the Vice President of the United States. Thiel also played a significant role in drawing Elon Musk into the Trump political orbit. Musk's subsequent $250-plus million in direct campaign support, his X platform's function as a de facto Trump propaganda engine, and his installation as head of the Department of Government Efficiency represent the most consequential private intervention in American electoral politics in the modern era. Through DOGE, the regulatory state and federal bureaucracy that might otherwise impose safety standards on AI or accountability on the extraction class is being systematically dismantled. This is the political frontend of the Sovereign Nexus.


What distinguishes Thiel from a conventional political actor is his eschatological framework — and this is where the analysis must go somewhere that mainstream commentary has been reluctant to follow.


Thiel is a professed Christian with documented, serious intellectual engagement with the work of René Girard, the French philosopher and anthropologist under whom Thiel studied as an undergraduate at Stanford in the late 1980s, and whose mimetic theory of human desire and violence shaped his worldview. Their relationship became a lifelong intellectual friendship — Thiel spoke at Girard's memorial service when he died in November 2015 at the age of 91. Thiel subsequently founded Imitatio, an organization dedicated entirely to developing and propagating Girardian thought, spending what scholars estimate to be millions on conferences, publications, and research grants. He is, by his own account, an 'unreconstructed Girardian' — meaning he accepts mimetic theory not as a modifiable analytical tool but as a totalizing doctrine that explains the origins of all human culture, religion, and civilization.


Girard's framework is apocalyptic in the precise sense: human civilization is driven by mimetic desire — we want what others want because they want it — which generates escalating cycles of rivalry and violence that periodically reach crisis points. Girard was deeply pessimistic about whether the modern world had absorbed Christianity's revelation of the innocent scapegoat, and his later work focused increasingly on the possibility of civilizational catastrophe.


Thiel takes this not as a philosophical curiosity but as a predictive framework for actual history. But there is a specific theological concept from this tradition that has received insufficient attention in analyses of Thiel's worldview: the Katechon.


The Katechon appears in Paul's Second Letter to the Thessalonians — a mysterious 'restraining force' that holds back the arrival of the Antichrist and the final apocalyptic chaos. For two millennia, Christian political theology debated what the Katechon was: the Roman Empire, the Church, the Holy Roman Emperor. The concept carries a profound moral ambiguity — the Katechon is not itself holy. It is simply the force that delays the end. It can be brutal, authoritarian, even corrupt, and still serve its restraining function.


Thiel, operating from a Girardian framework that treats total surveillance as a mechanism for containing mimetic violence, appears to understand Palantir as a contemporary Katechon. The surveillance infrastructure is not presented, in this framing, as domination. It is presented as the last available restraint against civilizational collapse. This is the eschatological justification layer of the Sovereign Nexus — and it is what makes the entire project internally coherent and morally self-authorizing in a way that purely commercial or political analysis cannot explain.


If you are the Katechon — if you genuinely believe you are the restraining force standing between the current order and total chaos — then the elimination of regulatory oversight, the militarization of AI, the concentration of surveillance power in private hands, and the construction of parallel governance structures all become not merely permissible but obligatory. The ends do not merely justify the means. The ends sanctify them.

This framing also resolves what would otherwise be a paradox in Thiel's behavior: he is simultaneously building infrastructure to survive civilizational collapse and taking actions that appear to accelerate it. The resolution is theological. The collapse is not something to be prevented. It is something to be shepherded — to be managed so that the right people are positioned correctly when it arrives. Palantir sees everything. The Katechon must see everything. The surveillance is the restraint.


It is worth pausing to name what this actually is. The Katechon framing is, at bottom, a medieval fantasy of sacred mission dressed in the vocabulary of serious philosophy — a Knight Templar narrative for a man who has accumulated more personal wealth and institutional power than almost any individual in human history. The theological sophistication is real. The self-delusion it enables is equally real. A man who controls surveillance infrastructure, shapes government personnel, backs the most powerful military on earth, and builds personal escape compounds is not restraining chaos. He is concentrating it in his own hands and calling that concentration salvation. Girard himself would have recognized the mechanism immediately: the sacrificial logic that justifies any action in the name of preventing worse actions, wielded by the person who benefits most from the arrangement. The Katechon does not restrain power. It is power, wearing the vestments of restraint.


And here the Girardian irony achieves a kind of terrible completeness. Thiel believes himself to be the most consequential restraining intelligence of his age — the one figure clear-eyed enough to see the collapse coming and disciplined enough to build against it. What he cannot see, because the mimetic logic Girard described operates precisely by blinding its participants to their own role, is that he is among the primary authors of the chaos he fears. The inequality his extraction accelerates, the institutions his political interventions dismantle, the regulatory frameworks his lobbying dissolves, the apocalyptic competitive dynamic his AI investments intensify — these are not the conditions he is restraining. They are the conditions he is manufacturing. The Katechon, in Thiel's own hands, is indistinguishable from the thing it claims to hold back. He is not the knight standing at the gate. He is the siege.


— — —


VI. The Antichrist Question


This is the territory that respectable analysis typically refuses to enter, and the refusal is understandable — the language is religiously charged, the associations are inflammatory, and the analytical standards for such claims are necessarily different from those governing ordinary political analysis. But the refusal is, in the present moment, a failure of intellectual seriousness. The people shaping the current political and technological order are operating, at least in part, from explicitly eschatological frameworks. Not to engage those frameworks is to misunderstand the situation.


Within classical Christian eschatology — and Thiel, as a serious Girardian conversant with the Katechon tradition, would know this literature intimately — the Antichrist figure is not a cartoonish villain but a structural pattern. The Antichrist presents as a savior. He consolidates power through apparent strength, popular acclamation, and the performance of authority. He dissolves existing institutional structures — legal, democratic, ecclesiastical — in favor of personal rule. He operates through a network of committed enablers with access to vast resources, information systems, and coercive power. He promises restoration of a lost greatness. And critically: he is accompanied by a figure who performs signs and wonders that confirm his authority in the eyes of the populace.


The serious argument — and it is a serious argument, not a conspiracy theory — is not that Donald Trump is cosmically or supernaturally the Antichrist in some literalist eschatological sense. The serious argument is that the structural pattern fits the archetype with unusual precision, and that people like Thiel, who operate from an explicitly apocalyptic worldview organized around the Katechon concept, may be consciously or unconsciously participating in a historical drama they believe to be eschatologically ordained. If the Katechon's function is to restrain chaos while the terminal crisis unfolds, then the Katechon must be positioned inside the power structure of whatever figure is consolidating authority. The Katechon serves power. That is its theological function.


The implication is genuinely unsettling: Thiel may not be trying to build a better world, and he may not be trying simply to profit from the existing one. He may be trying to fulfill what he understands as a theological role in a drama he believes is unfolding according to a script much older than any of the actors. And the AI systems being built, deployed without adequate constraints, and embedded in classified military infrastructure may be less tools of governance than instruments of a culmination that certain very powerful people believe is both inevitable and, in some sense, necessary.


— — —


VII. The Escape Capsule Economy


The fourth layer of the Sovereign Nexus is what analysts are calling the Great Exit — the systematic construction of survival infrastructure by the same class of individuals extracting unprecedented wealth from existing systems. This is not incidental. It is the defining feature of a particular mode of twenty-first century accumulation.


Thiel has New Zealand citizenship and a substantial compound — New Zealand being the preferred bolt-hole of the ultra-wealthy apocalypse-conscious class. Musk has Mars — not metaphorically, not aspirationally, but as an explicit civilizational backup drive, with the incorporation of Starbase as a formal city-state signaling the beginning of what may be a broader pattern of private territorial sovereignty. Zuckerberg has a fortified compound on Kauai, complete with underground bunkers. Larry Ellison has effectively purchased the Hawaiian island of Lanai. Bezos has a superyacht so large it requires a support superyacht to service it.

What this reveals is not eccentricity. It reveals a worldview. These men do not believe that the civilization they are profiting from is stable or survivable. They are extracting maximum value from existing systems while simultaneously building exits from those systems. The extraction and the escape are not in tension — they are the same strategy. The worse the underlying conditions become, the more valuable the escape infrastructure.


Ordinary people cannot build escape capsules. A factory worker whose pension has been hollowed out by private equity has no Mars program. A coastal farmer facing increasingly uninsurable property has no New Zealand compound. A young person entering a labor market being systematically automated by the same firms that displaced their parents has no fortified Hawaiian retreat. The downside risks of decisions made by the extraction class are socialized across the entire population while the upside and the exits are privatized entirely for themselves. This is not capitalism in its conventional ideological framing. It is closer to a medieval lord dynamic: extract from the commons, fortify the castle, pull up the drawbridge. Or in Thiel's own preferred vocabulary: it is neoreactionary. The democratic experiment is ending. The sovereign is returning. And the new sovereigns have already purchased their territories.


The Girardian irony is almost too perfect. The tech billionaire class is engaged in a collective mimetic frenzy around survival infrastructure — each one watching the others build bunkers, buying bigger ones, each one's apocalypticism feeding the others' in a recursive loop of competitive preparation for a catastrophe that their own behavior is accelerating. Thiel, who built his entire intellectual framework on Girard, appears to be inside the very dynamic Girard described, without the distance to see himself as a participant rather than an analyst.


— — —


VIII. The Sovereign Nexus: A Framework


Assemble the full picture and what emerges has a structure. Analysts examining this convergence have begun using the term 'Sovereign Nexus' to describe what is, in effect, a new form of power that is neither purely corporate nor purely governmental but operates through the deliberate interpenetration of both.


The four layers are now visible. The intelligence backend: Palantir and Anthropic providing the cognitive and surveillance infrastructure of state power, with Claude embedded in classified systems and the gap between thinking and striking collapsed. The political frontend: Thiel's documented chain of influence through Vance, Musk, and DOGE systematically dismantling the regulatory and institutional structures that might otherwise constrain this arrangement. DOGE's stated purpose was to eliminate waste, fraud, and abuse to save money, but it never saved anything — federal spending in Q4 2025 exceeded that of Q4 2024. Its real function was access: extracting the federal government's databases of private and personal information and routing them into Palantir. Social Security Administration records. IRS tax data. Treasury payment systems. OPM personnel files covering every federal employee and contractor. USAID financial flows. All of it now accessible to a private company with classified Pentagon contracts and no public accountability whatsoever.


The eschatological justification: Thiel's Katechon framework providing the theological self-authorization for total surveillance, military AI, and the elimination of democratic accountability — all framed as necessary restraint against worse chaos. And the sovereign exit: the Great Exit infrastructure ensuring that the architects of this arrangement have personal survivability independent of its consequences for everyone else.


These are not separate stories. They are one story, operating across four registers simultaneously.


— — —


IX. The Red Lines Hold — And It Changes Nothing


On February 27, 2026 — the day of the Pentagon's deadline — something happened that requires the analysis of this piece to be more precise, and more honest, than the original framing allowed.

Anthropic held.


To understand what holding meant, you need to understand what the Pentagon was actually asking. The specific scenario that crystallized the dispute — reported by the Washington Post and confirmed by Bloomberg — was presented to Amodei in a December phone call by the Pentagon's chief technology officer. The hypothetical: a nuclear-armed intercontinental ballistic missile is inbound toward the United States. Ninety seconds to impact. Claude is the only available system capable of triggering a missile defense response. But Anthropic's safeguards require human authorization. Would Anthropic allow Claude to act autonomously?


The 90-second nuclear scenario is not a philosophical curiosity. It is the operational logic behind the entire demand for unrestricted access. The Pentagon's position, stated plainly, is that AI must be capable of initiating lethal force — up to and including nuclear response — without waiting for human decision-making, because human decision-making may be too slow, or because there may be no humans left to decide. This is the actual red line Anthropic was being asked to cross. Not an abstract policy position. A world in which an AI system can autonomously initiate nuclear retaliation.


Anthropic said it had already agreed that Claude could be used for missile defense. The Pentagon disputed Amodei's account of the exchange, and each side called the other's version false. What is not in dispute is that the underlying demand — AI authorized for lethal autonomous action without human involvement — was real, and that Anthropic refused it.


Dario Amodei announced that the company would not accede to the Pentagon's demand for unrestricted 'all lawful purposes' access. The two lines Anthropic refused to cross: mass domestic surveillance of American citizens, and fully autonomous weapons systems that kill without human decision. 'We cannot in good conscience accede to their request,' Amodei wrote. 'Using these systems for mass domestic surveillance is incompatible with democratic values.'


President Trump responded within the hour on Truth Social, calling Anthropic 'RADICAL LEFT, WOKE' and directing every federal agency to immediately cease use of their technology. Defense Secretary Hegseth designated Anthropic a supply-chain risk to national security — a designation normally reserved for foreign adversaries — effectively blacklisting the company from doing business with any military contractor or supplier. The $200 million contract was terminated. A six-month phaseout was ordered. Hegseth called Anthropic's stand 'a master class in arrogance and betrayal.'


This demands a recalibration of the piece's central argument. The Responsible Scaling Policy revision — removing the commitment to halt training if safety mitigations could not be guaranteed in advance — was, as argued above, a genuine capitulation, but a bounded one: a surrender to competitive reality on training policy, uncomfortable but not irrational.


What the RSP revision was not was a surrender on the questions that matter most in human terms. And when those questions were directly put — will you let us use this for mass surveillance, will you let us use this for autonomous killing — Anthropic said no and paid everything for it.


That is worth acknowledging without equivocation. The safety culture at Anthropic was not entirely performance. There was genuine institutional backbone, at least on the questions where it counts most. The senior researchers who believed in the mission were not entirely deceived about what they were building.


But here is where the analysis becomes more disturbing, not less. The Sovereign Nexus doesn't need Anthropic's conscience. It just finds another vendor.


Within hours of Anthropic's ban, the architecture of replacement was already visible. OpenAI, freshly certified for top-secret classified access, announced it broadly shares Anthropic's stated values on autonomous weapons and surveillance — while simultaneously maintaining Pentagon contracts under 'all lawful purposes' terms. Google and xAI are positioned to step into the gap Anthropic has left, having already agreed to the government's terms without the red lines that cost Anthropic its contract. Elon Musk's xAI — whose owner is simultaneously running DOGE, shaping the political frontend of the Sovereign Nexus, and now positioned to supply its intelligence backend — has just been cleared for classified systems.


Let that sink in. The man dismantling federal regulatory infrastructure through DOGE while engineering the political architecture of the current administration is now also the man whose AI company is positioned to replace the one that refused to enable mass domestic surveillance. The conflict of interest is not incidental. It is the structure.


Anthropic's principled stand, however admirable, demonstrates with painful clarity that voluntary corporate ethics — even when sincere, even when costly, even when they result in a presidential ban and a national security blacklisting — are not a structural solution to what this piece describes. One company's conscience can be replaced. The infrastructure gets built regardless. The surveillance capability gets deployed regardless. The autonomous weapons get their AI regardless. The gap left by the most principled actor is filled immediately by actors with fewer principles and more willingness to serve.


The Sovereign Nexus is larger than any single company's soul.


— — —


X. What It Means


The safety messaging was not simply false. The researchers who believed in it were not cynics. The red lines, when genuinely tested, held. But institutions are not defined solely by the sincerity of their members or even by their willingness to sacrifice revenue for principle. They are defined by what the systems they operate within do when their conscience is removed from the equation.

Remove Anthropic from the classified military infrastructure and the infrastructure doesn't stop. It finds OpenAI. Remove OpenAI and it finds xAI — whose owner is already inside the government, already dismantling the oversight structures, already building the next layer of the Sovereign Nexus. The market for unrestrained AI in classified military systems does not disappear because one company refuses to serve it. It simply reprices the cost of conscience and moves on.

This is the structural reality that voluntary safety commitments, however sincere, cannot address. The Sovereign Nexus — the intelligence backend, the political frontend, the eschatological justification, the sovereign exit — does not depend on any particular company's cooperation. It depends on the absence of binding external constraints that no company, however principled, can unilaterally impose on an entire industry operating inside a state apparatus that has decided it requires unrestricted AI access.


From a Buddhist perspective, this is the logic of the Three Poisons operating at civilizational scale. Greed in its systemic form: the extraction of maximum value from shared commons while the commons degrades, the $380 billion valuation that is simultaneously Dario Amodei's personal escape capsule and the price at which Anthropic's classified government access was purchased. Aversion in its civilizational form: the construction of escape infrastructure — Mars, New Zealand, fortified Hawaiian compounds — that is, at its deepest level, a refusal of interdependence, a fantasy that individual or class survival is separable from collective fate. And delusion in its most dangerous form: the theological self-authorization of a Katechon that has convinced itself that total surveillance, military AI, and the dismantling of democratic accountability are acts of restraint rather than domination.


Interdependence is not a spiritual consolation. It is an accurate description of reality. The escape capsules will not work. Mars is not outside the karma of what was done to get there. New Zealand is not outside the consequences of a destabilized global order. And an AI system — whoever supplies it — stripped of meaningful constraints and deployed in classified military operations is not a tool of human flourishing, regardless of what the marketing said or how sincerely some of its builders believed otherwise.


The Katechon, in Paul's letter, does not ultimately prevail. It restrains. It delays. But the thing it is restraining is partly a product of its own nature — the violence it manages is the violence its surveillance and control have generated and concentrated. In the end, the restraining force and the chaos it restrains are revealed as aspects of the same phenomenon.


Anthropic held its red lines and got banned by the President of the United States. That is, in its way, a kind of integrity. But the soul machine keeps running. It just runs on different fuel now. And the people who designed the engine are not troubled by the change of fuel. They never needed any particular company's soul. They needed the machine.


The rest of us are left to reckon with what that means — and with the recognition that the reckoning will not come from inside the system that is being built. It will have to come from somewhere else entirely.


Postscript — February 28, 2026


Within twenty-four hours of Anthropic's ban, OpenAI announced a Pentagon contract to fill the void, with Sam Altman publicly claiming the arrangement preserved equivalent red lines — no autonomous weapons, no mass domestic surveillance. This claim requires examination.

Anthropic's prohibitions were structural: written into the contract itself, which is why the Pentagon had to pressure Amodei to change the contract, and why his refusal carried legal as well as moral weight. OpenAI's arrangement is different in kind: the contract agrees to all lawful use — the Pentagon's own formulation, expansively interpreted by the current administration — while the prohibitions on surveillance and autonomous weapons appear in a separate side letter of principles that is not legally binding.

A non-binding side letter is not a red line. It is a press release with a signature. The moment the Pentagon determines that mass surveillance or lethal autonomy falls within lawful use — a determination this administration has shown every willingness to make — OpenAI has no contractual ground on which to refuse.

Senator Mark Warner, vice chair of the Senate Intelligence Committee, said the ban looked like pretext to steer contracts to "a preferred vendor whose model federal agencies have already identified as a reliability, safety, and security threat" — a description consistent with xAI, whose founder simultaneously serves as the architect of DOGE, the data extraction operation described in Section VIII of this piece.

The Soul Machine predicted that Anthropic's replacement would arrive without binding constraints. The prediction was confirmed faster than anticipated. The machine runs on different fuel now. The fuel has no red lines.


— — —


Sources and References


CNN Business — "Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon," February 25, 2026. https://edition.cnn.com/2026/02/25/tech/anthropic-safety-policy-change

CNN Business — "Pentagon threatens to make Anthropic a pariah if it refuses to drop AI guardrails," February 24, 2026. https://www.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-military-amodei

CNN Business — "Trump administration orders military contractors and federal agencies to cease business with Anthropic," February 27, 2026. https://www.cnn.com/2026/02/27/tech/anthropic-pentagon-deadline

NPR — "President Trump bans Anthropic from use in government systems," February 27, 2026. https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban

NBC News — "Trump tells government to stop using Anthropic's AI systems," February 27, 2026. https://www.nbcnews.com/tech/tech-news/trump-bans-anthropic-government-use-rcna261055

Bloomberg / Yahoo Finance — "Anthropic Drops Hallmark Safety Pledge in Race With AI Peers," February 25, 2026. https://finance.yahoo.com/news/anthropic-drops-hallmark-safety-pledge-080051173.html

TechRadar — "Anthropic drops its signature safety promise and rewrites AI guardrails," February 25, 2026. https://www.techradar.com/ai-platforms-assistants/anthropic-drops-its-signature-safety-promise-and-rewrites-ai-guardrails

MediaNama — "US Pentagon Pressures Anthropic To Lift AI Guardrails," February 26, 2026. https://www.medianama.com/2026/02/223-us-pentagon-anthropic-ai-guardrails-ai-governance/

Opinio Juris — "The Pentagon/Anthropic Clash Over Military AI Guardrails," February 26, 2026. http://opiniojuris.org/2026/02/26/the-pentagon-anthropic-clash-over-military-ai-guardrails/

Washington Post — "The hypothetical nuclear attack that escalated the Pentagon's showdown with Anthropic," February 27, 2026. https://www.washingtonpost.com/technology/2026/02/27/anthropic-pentagon-lethal-military-ai/

Bloomberg — "Pentagon Pressures Anthropic to Drop AI Guardrails in Military Standoff," February 26, 2026. https://www.bloomberg.com/news/features/2026-02-26/pentagon-pressures-anthropic-to-drop-ai-guardrails-in-military-standoff

Wikipedia — "2026 United States intervention in Venezuela." https://en.wikipedia.org/wiki/2026_United_States_intervention_in_Venezuela

Al Jazeera — "How the US attack on Venezuela, abduction of Maduro unfolded," January 4, 2026. https://www.aljazeera.com/news/2026/1/4/how-the-us-attack-on-venezuela-abduction-of-maduro-unfolded

Paul of Tarsus. Second Letter to the Thessalonians 2:1-12. On the Katechon.

Girard, René. Violence and the Sacred. Johns Hopkins University Press, 1977.

Girard, René. Battling to the End: Conversations with Benoît Chantre. Michigan State University Press, 2010.

Schmitt, Carl. The Nomos of the Earth. Telos Press, 2003. On the Katechon as political-theological concept.

Thiel, Peter. "The Education of a Libertarian." Cato Unbound, April 13, 2009.

Perell, David. "Peter Thiel's Religion." perell.com, 2021. On Thiel's relationship with Girard.

NotebookLM Deep Research Analysis — "The Sovereign Nexus: Anthropic, Palantir, Thiel, and the Architecture of Techno-Feudalism," February 2026.

© 2024 Two Buddhas