Tech Explained: Trump orders federal agencies to stop using Anthropic AI technology ‘immediately’ amid Pentagon dispute
US President Donald Trump on Friday (February 27) ordered all federal agencies to immediately stop using technology from AI company Anthropic, escalating a public feud between the administration and the artificial intelligence lab over military use of AI systems.

In a post on Truth Social, Trump declared that the US military will not be dictated to by a private tech firm, writing: “THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!”

He added that the decision rests with the Commander-in-Chief and ordered agencies to halt use of Anthropic’s products, introducing a six-month phase-out period for departments currently using the company’s technology.

Trump warned of consequences if the company does not cooperate, stating: “Anthropic better get their act together… or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”

Pentagon–Anthropic clash over AI safeguards

The directive comes amid tensions between Anthropic and the Pentagon over safeguards governing military use of AI.

Anthropic CEO Dario Amodei drew a firm line, saying the company “cannot in good conscience accede” to demands for unrestricted deployment of its technology.

The company, maker of the chatbot Claude, said it had sought assurances that its systems would not be used for mass surveillance of Americans or fully autonomous weapons. In a statement, Anthropic said new contract language was “framed as compromise [but] paired with legalese that would allow those safeguards to be disregarded at will.”

‘Supply chain risk’ warning

Military officials had warned that if Anthropic did not comply, it could be designated a “supply chain risk,” a classification typically applied to foreign adversaries and one that could disrupt corporate partnerships.

Defense officials also referenced potential use of the Defense Production Act, a Cold War-era law that could expand government authority over production in the interest of national security.

Pentagon spokesperson Sean Parnell said the military seeks lawful use only, stating it has “no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”

He emphasized the Pentagon wants to “use Anthropic’s model for all lawful purposes.”