AI Warfare Is Outpacing Our Ability to Control It

An Iranian firefighter sits in front of destroyed vehicles located on the ruins of a building which was destroyed during the US-Israeli military campaign that struck a residential area on Monday, March 9, in Tehran, Iran on March 12, 2026. (Photo by Morteza Nikoubazl/NurPhoto via AP)
Public unease around the United States’ and Israel’s use of artificial intelligence in the war in Iran is growing. People are asking the questions that governments should have answered long before deploying these systems in combat: What role does algorithmic targeting play? Who bears responsibility when something goes catastrophically wrong, and innocent civilians are killed?
A troubling pattern is emerging. Governments are racing to integrate AI into warfare without fully understanding its accuracy, limitations, or consequences. The school bombing that killed nearly 200 children and teachers in Minab, though blamed primarily on human error, reinforces the danger of acting quickly on faulty intelligence. Meanwhile, disputes between AI companies and governments over the military use of advanced systems, and the public backlash against those who support their use, reveal widespread concern. The US government’s attempted retaliation against Anthropic for trying to put guardrails around how its systems are used in warfare further underscores how governments keen to find advantage on the battlefield are emphasizing the wrong issues.
This concern is entirely justified. We have already seen how, when targeting data is outdated, misinterpreted, or unverified, the consequences are unconscionable. American officials have confirmed the use of AI technology to assist in airstrikes in Iran. Despite their claims of increased precision, unarmed civilians have been killed in these attacks. When the US was pressed for answers on reported AI-enabled strikes that led to civilian deaths in Iraq in 2024, the Department of War said it was not possible to determine whether AI had been used.
Military use of AI systems involved in targeting and the use of force poses major global security risks. Autonomous systems can cause unintended escalation and accidental conflict. Algorithmic error can cascade, undetected, leading to human operators making catastrophic decisions based on false data.
An even partially hallucinated intelligence assessment could feed into a targeting recommendation and be approved in seconds, potentially actioned by an autonomous drone swarm. The faster integrated systems operate, the harder it is for humans to detect errors and course-correct. In a recent US Navy maritime test, a rogue drone that escaped human control for only three minutes caused a tugboat to capsize, leading to a Coast Guard rescue of the captain. With nuclear powers vying to use AI-enabled systems in tinderbox regions of the world, the stakes could not be higher.
Decision support systems leave humans vulnerable to automation bias: trusting outputs and recommendations disproportionately, on the false belief that algorithms and software surpass human judgment.
Autonomous weapons systems cannot distinguish between a combatant and a child, let alone recognize the act of surrender or the value of hesitation. Algorithms perceive patterns, not context, which is why they fail to understand the complex and fast-changing behaviors of humans in war. It is for this reason that the International Committee of the Red Cross and a broad coalition of civil society organizations advocate for a prohibition of autonomous weapons that specifically target humans.
Even if these systems grow more capable, the risks will not disappear. The same capability that could distinguish between ‘friend’ and ‘foe’ could also be used for targeted killings of specific individuals or groups of people. Highly capable and accessible autonomous weapons could proliferate widely, enabling regimes and terrorists to use them for atrocities — extrajudicial killings, assassinations, terrorism, or genocide.
AI-enabled systems are known to cause cognitive overload for humans, producing more data than humans can process. The increasing number of targets identified with AI at machine speed raises the question of whether automated systems are outpacing humans’ ability to fully verify those targets. Recent AI-targeting in Gaza has shown human operators spending mere seconds to verify and approve a target strike.
Moreover, AI-enabled targeting and decision support systems are leading to action bias. Humans interacting with machines in various domains, both civil and military, are known to experience cognitive atrophy when over-relying on AI systems. Militaries must maintain meaningful human control in the face of cognitive overload and atrophy.
Worryingly, we are seeing militaries use AI-enabled decision support systems with interfaces that resemble video games. Some of these interfaces set high quotas for hit rates and encourage execution through point systems that reward individual battalions with further weaponry and resources. Minimal human involvement in the kill chain is a feature, not a bug.
The companies that sell these systems say they enable killing with precision. What they actually do is enable killing at scale. AI-enabled systems have led to an unprecedented level of targeting and strikes that is overwhelming not only human cognitive capabilities but the legal systems in place to protect civilians and minimize harm.
In the first four days of Operation Epic Fury in Iran, the US and Israel claimed to have hit 4,000 targets. This is more than in the first six months of the bombing campaign against ISIS. The US reportedly aims to achieve 1,000 strikes in one hour. International law as it stands, including the Geneva Conventions, cannot account for the accumulated destruction and civilian toll caused by AI-generated targeting. International humanitarian law obliges militaries to carry out a proportionality assessment for each strike, weighing potential civilian harm against military necessity. A single strike leading to the death of ten civilians can be deemed lawful under international humanitarian law. But if a military carries out 1,000 hypothetically lawful strikes in one day, leading to thousands of civilian deaths, has the law fallen behind? AI-enabled targeting can allow a military to cause unprecedented civilian death tolls, similar to those of indiscriminate bombing, whilst claiming data-driven precision.
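The tension between strike tempo and per-strike legal review can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only: it assumes a reported tempo of 1,000 strikes per hour and that targets are split evenly across a hypothetical pool of human reviewers, and it ignores everything else a real review involves.

```python
# Back-of-envelope: how much review time per target does a given
# strike tempo leave? All parameters here are illustrative assumptions,
# not figures from any military source.

def seconds_per_target(strikes_per_hour: int, reviewers: int = 1) -> float:
    """Average seconds each reviewer can spend per target, assuming
    targets are divided evenly across the reviewer pool."""
    return 3600 / strikes_per_hour * reviewers

# At 1,000 strikes per hour, a single reviewer gets 3.6 seconds per target.
print(seconds_per_target(1000))
# Even a (hypothetical) pool of 20 reviewers gets only 72 seconds each.
print(seconds_per_target(1000, reviewers=20))
```

Even under generous staffing assumptions, the arithmetic leaves far less time per target than a meaningful proportionality assessment requires, which is the core of the scale argument above.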
The scale of attack that AI enables, compounded by the inaccuracy and unpredictability of AI systems, leads to an unacceptable level of catastrophe and violence. Automation bias, cognitive overload, and gamified interfaces are leading militaries to yield control. In this race to the bottom, we all lose.
We face an urgent global governance gap in relation to this issue. Decisions about life and death should never be delegated to algorithms without robust oversight. We need legally binding national and international rules requiring meaningful human control over autonomous weapons, which the Future of Life Institute has long campaigned for. That means ensuring clear human responsibility for every targeting decision, rigorous legal and ethical review before deployment, verified system reliability and transparency, and strict limits on where, when, and how these systems can be used.
We have now seen a broad and undeniable chorus of voices calling for strong rules on military AI, including Ukrainian President Volodymyr Zelenskyy, Pope Francis, Pope Leo XIV, Anthropic, 1,035 employees at Google and OpenAI, the UN Secretary-General, and the ICRC President. 130 governments have declared support for a legally binding instrument on autonomous weapons.
We cannot afford to wait any longer. We must set global norms before these technologies become entrenched in how wars are fought, but the window to act is narrowing. How many more atrocities will we allow before we act?
