Tech Explained: Pentagon tech chief on Anthropic’s AI safety rules, and what it means for users.
What’s the story?
The US Department of Defense’s Chief Technology Officer, Emil Michael, has publicly criticized AI company Anthropic after a dispute over the use of its technology in fully autonomous weapons.
Michael described Anthropic’s ethical restrictions on its chatbot Claude as an “irrational obstacle” to the military’s goal of increasing autonomy in armed drones and other machines.
Need for reliable AI partner
Michael stressed the importance of having a dependable partner in the field of AI technology.
He said, “I need a reliable, steady partner that gives me something, that’ll work with me on autonomous.”
This statement highlights his desire for collaboration with companies that can contribute to the development of autonomous systems without compromising national security interests.
Pentagon designates Anthropic as supply chain risk
The Pentagon has officially designated Anthropic as a supply chain risk.
This designation halts its defense work under a rule meant to protect national security systems from potential threats posed by foreign adversaries.
In response, Anthropic has vowed to sue over the designation, which could affect its business partnerships with other military contractors.
Trump orders halt to federal use of Claude
President Donald Trump has also ordered federal agencies to stop using Claude, giving the Pentagon six months to phase out the product.
The phase-out period reflects how deeply Claude is embedded in classified military systems, including those used in the Iran war.
Anthropic has clarified that it only wanted its technology to be restricted from two high-level uses: mass surveillance of Americans and fully autonomous weapons.
Michael reveals details of talks with Amodei
Michael revealed his side of the months-long talks with Anthropic CEO Dario Amodei.
He said he had to present scenarios, such as a Chinese hypersonic missile example, to negotiate terms of service that were rational relative to the military’s mission set.
This indicates a complex negotiation process where potential future scenarios were used as leverage in discussions about AI technology use in military operations.
‘Department of War, not private companies, makes military decisions’
In response to Michael’s podcast comments, Anthropic highlighted an earlier statement by Amodei, saying “Anthropic understands that the Department of War, not private companies, makes military decisions.”
This emphasizes the company’s position that it does not interfere with specific military operations or try to restrict its technology use on a case-by-case basis.
