Trump banned Anthropic — hours later, US military used its Claude AI in Iran strikes: Report


The US government used AI tools from Anthropic during the air attack launched on Iran, just hours after declaring that it would stop using technology from the AI startup. According to a report by the Wall Street Journal, commands around the world, including US Central Command in the Middle East, used Anthropic’s Claude AI during the Iran attack.

Reportedly, the commands used Anthropic’s AI for intelligence assessments, target identification and simulating battle scenarios. Prior to the Iran attack, another WSJ report had revealed that Anthropic’s AI was also used by the Pentagon during the capture of Venezuelan president Nicolás Maduro.

The report noted that the use of Claude in high-profile missions is among the reasons why the US administration said it would take six months to phase out the technology from the AI startup.

In a Truth Social post about ending the deal with Anthropic, US President Donald Trump called the company ‘leftwing nut jobs’ and ‘woke’ while claiming that ‘their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.’

Trump had directed all federal agencies in the US to ‘immediately cease’ using Anthropic technology.

“We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels,” he wrote.

US and Anthropic feud over AI safety

The Pentagon and Anthropic had been arguing for months over how the company’s AI models are used in national defence. The AI startup said it had allowed the US Department of Defense to use its technology broadly, with two exceptions: mass domestic surveillance of Americans and fully autonomous weapons.

Anthropic has also said it will challenge the US government’s designation of the company as a ‘supply chain risk’ in court.

“Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so,” the company wrote in a blog post.

“We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government,” it added.