Trump Directs Federal Agencies to Expel Anthropic Over Military Guardrail Standoff

Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. (AP Photo/Patrick Sison)

WASHINGTON, D.C. — In an unprecedented escalation of the conflict between the Silicon Valley elite and the federal government, President Donald Trump on Friday ordered all executive agencies to immediately cease the procurement of Anthropic technology and begin a comprehensive phase-out of the company’s AI systems.

The directive, issued via Truth Social just an hour before a 5:01 p.m. ET deadline set by the Pentagon, marks the most severe retaliation against a domestic technology company in the history of American artificial intelligence policy. The President blasted the company as being run by “Radical Left” activists, signaling a permanent rupture between his administration and the safety-focused AI lab.

The Standoff at the Pentagon

The crisis reached a breaking point this week following months of simmering tension between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth. At the heart of the dispute were two “bright red lines” Anthropic refused to cross: the use of its Claude AI models for mass domestic surveillance of American citizens and the development of fully autonomous lethal weapons systems.

While the Department of Defense (DOD)—frequently referred to by the administration as the “Department of War”—argued that it has no legal intention of conducting illegal surveillance, it demanded “unrestricted access” to the technology for all “lawful purposes.” Anthropic countered that the administration’s contract language included “legalese” that would allow safety guardrails to be disregarded at the government’s whim.

“We cannot in good conscience accede to this request,” Amodei stated in a defiant blog post on Thursday. “Anthropic was founded on the principle that frontier AI must be built and deployed responsibly. Giving any government the ability to bypass safety protocols is a risk we are not willing to take.”

The Executive Order and the “Supply Chain Risk”

President Trump’s response was swift and total. The new directive mandates:

  • Immediate Cessation: All federal agencies must stop using Anthropic’s Claude models for new projects, effective immediately.
  • Six-Month Phase-Out: Agencies with deeply embedded Anthropic technology, specifically the Pentagon’s classified networks where Claude is currently the only frontier model cleared for Secret-level work, have six months to migrate to alternative providers.
  • Criminal Consequences: The President warned that the company could face “major civil and criminal consequences” if it does not cooperate fully during the transition period.

Administration officials, including Under Secretary of Defense for Research and Engineering Emil Michael, have also suggested the company could be designated a “supply chain risk.” This label, typically reserved for foreign adversaries like Huawei or TikTok, would effectively bar any private defense contractor from using Anthropic’s tools, potentially crippling the company’s commercial growth ahead of its highly anticipated IPO.

A Pivot to “Grok” and Other Rivals

The sudden vacancy left by Anthropic is expected to be filled rapidly by its competitors. Elon Musk’s xAI, which develops the Grok model, has been positioned by the administration as the preferred alternative. Musk, a close ally of the President, has frequently criticized Anthropic’s “Constitutional AI” approach as being “woke” and “anti-human.”

Industry analysts suggest that OpenAI and Google may also see an influx of federal business, provided they comply with the administration’s demands for unrestricted military utility. However, the purge of Anthropic sends a chilling message to the entire sector: in the “America First” AI race, corporate ethical guidelines are secondary to state directives.

The Economic and Ethical Fallout

The move has sent shockwaves through the AI industry. Anthropic, recently valued at $30 billion, now faces an existential threat to its revenue streams. While the company has seen “hockey-stick growth” in the private sector with its Claude Code product, the loss of government contracts—and the potential “supply chain” blacklisting—could alienate enterprise customers who fear being caught in the political crossfire.

Civil liberties groups have praised Anthropic’s stance. The Electronic Frontier Foundation (EFF) issued a statement noting that “AI companies have a responsibility to resist unlawful bulk surveillance,” while critics of the administration argue that the President is creating a “regulatory vacuum” where safety is sacrificed for “military AI dominance.”

As the six-month clock begins for the Pentagon to strip Claude from its systems, the battle lines for the future of AI governance have been clearly drawn. It is no longer a question of if AI will be used in war, but whether the companies that build it have any right to say how.


Disclaimer

Artificial Intelligence Disclosure & Legal Disclaimer

AI Content Policy.

To provide our readers with timely and comprehensive coverage, South Florida Reporter uses artificial intelligence (AI) to assist in producing certain articles and visual content.

Articles: AI may be used to assist in research, structural drafting, or data analysis. All AI-assisted text is reviewed and edited by our team to ensure accuracy and adherence to our editorial standards.

Images: Any imagery generated or significantly altered by AI is clearly marked with a disclaimer or watermark to distinguish it from traditional photography or editorial illustrations.

General Disclaimer

The information contained in South Florida Reporter is for general information purposes only.

South Florida Reporter assumes no responsibility for errors or omissions in the contents of the Service. In no event shall South Florida Reporter be liable for any special, direct, indirect, consequential, or incidental damages or any damages whatsoever, whether in an action of contract, negligence or other tort, arising out of or in connection with the use of the Service or the contents of the Service.

The Company reserves the right to make additions, deletions, or modifications to the contents of the Service at any time without prior notice. The Company does not warrant that the Service is free of viruses or other harmful components.