[Big read] The dangers of unchecked AI on the battlefield
As AI is increasingly used in military action, how far do we go in letting AI decide who and what to strike? Is human accountability no longer applicable? Lianhe Zaobao associate foreign editor Poh Hwee Hoon tells us more.
(Edited and refined by Candice Chan, with the assistance of AI translation.)
“It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.”
This is the description of the killer robot in the classic science fiction film The Terminator, the chilling story of an AI-controlled assassin sent back in time to kill the mother of a boy who grows up to be the future leader of humanity’s resistance against AI.
When the film was released in 1984, such a scenario seemed far off. But as AI technology is now being widely applied in the latest outbreak of war in the Middle East, concerns are growing that the future depicted in the film is rapidly becoming reality.
On 28 February, the US and Israel launched military operations against Iran at an unprecedented speed — US forces struck 1,000 targets within just 24 hours.
Reports indicate this is “double the scale” of the airstrikes during the 2003 Iraq War, and far exceeds the 150 targets struck at the outset of Operation Desert Storm in 1991, launched to drive Iraqi forces out of Kuwait.
The US military has been able to do this because of its use of advanced AI technologies.
It is understood that the US military primarily uses Maven, an intelligent targeting system developed in 2018 by data analytics giant Palantir. The system uses AI to analyse data, identify targets, and prioritise them.
The US military has also integrated the generative AI large language model Claude, developed by AI startup Anthropic, into the Maven system. Claude processes and synthesises frontline intelligence and generates targets, forming a real-time data analysis platform for operations against Iran.
According to reports, when planning the strikes, this platform generated hundreds of potential targets and prioritised them based on strategic importance. It also automatically matched specific military units and the most appropriate munitions — such as bunker-busting bombs for underground facilities or satellite-guided bombs for buildings — to each target. The system is also capable of simulating tactical scenarios, assessing the legality of strikes, and assisting with battle damage assessments.
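To make that workflow concrete, here is a minimal, purely hypothetical Python sketch of the kind of prioritise-and-match logic described above. It is not Palantir's Maven system, whose internals are not public; the target types, scores and munition names are invented for illustration, drawn only from the examples in the reporting.

```python
from dataclasses import dataclass

# Hypothetical illustration of "prioritise targets, then match munitions".
# This is NOT the Maven system; all names and values here are invented.

@dataclass
class Target:
    name: str
    kind: str                # e.g. "underground", "building"
    strategic_value: float   # 0.0-1.0, higher = more important

# Assumed mapping from target type to a suitable munition,
# mirroring the two examples given in the text.
MUNITION_BY_KIND = {
    "underground": "bunker-busting bomb",
    "building": "satellite-guided bomb",
}

def plan_strikes(targets: list[Target]) -> list[tuple[Target, str]]:
    """Rank candidate targets by strategic value and pair each with a
    munition; unrecognised target types are flagged for human review."""
    ranked = sorted(targets, key=lambda t: t.strategic_value, reverse=True)
    return [(t, MUNITION_BY_KIND.get(t.kind, "human review required"))
            for t in ranked]

if __name__ == "__main__":
    candidates = [
        Target("Site A", "underground", 0.9),
        Target("Site B", "building", 0.6),
        Target("Site C", "convoy", 0.4),
    ]
    for target, munition in plan_strikes(candidates):
        print(f"{target.name}: {munition}")
```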
On 11 March, Brad Cooper, head of US Central Command (CENTCOM), acknowledged that the US military had used “a variety” of AI tools in its war with Iran to help soldiers process vast amounts of data, though he did not disclose which tools were used.
Good at data processing, but may make up information
The Israeli military also uses AI-based systems, reportedly mainly Lavender and Gospel. In the Gaza war, Israel’s extensive surveillance systems fed data on residents in Gaza and other areas into Lavender, which was responsible for identifying and generating human targets, while the Gospel system primarily analysed buildings and terrain.
It [AI] excels at rapidly processing massive amounts of data, drastically shortening the “kill chain” — the entire decision-making and planning process from target identification, human approval, to execution of a strike…
It remains unclear whether Iran has deployed AI within its combat systems, or which systems it might be using. Iran claimed in 2025 that it would apply AI to missile targeting systems, but according to an analysis by The Guardian, international sanctions have left Iran’s AI programme negligible compared with those of the AI superpowers, the US and China.
The advantages of applying AI to the military are obvious. It excels at rapidly processing massive amounts of data, drastically shortening the “kill chain” — the entire decision-making and planning process from target identification, human approval, to execution of a strike — from hours or days to mere minutes or even seconds. This not only improves efficiency but also reduces manpower, shrinking teams for tasks that once required 2,000 analysts to roughly 20, allowing humans to focus on higher-level decision-making, logistics and mission planning.
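The shape of such a chain can be sketched in a few lines. The following is an illustrative Python outline, assuming a human-in-the-loop design in which the AI only proposes targets and a person must approve each one before execution; none of the stage names or functions come from any real military system.

```python
from typing import Callable

def run_kill_chain(
    identify: Callable[[], list[str]],     # AI stage: propose candidate targets
    human_approve: Callable[[str], bool],  # mandatory human sign-off per target
    execute: Callable[[str], None],        # strike execution
) -> None:
    """AI can shorten the chain by proposing targets quickly, but in this
    design no target reaches execution without an explicit human decision."""
    for target in identify():
        if human_approve(target):
            execute(target)

# Example run with stand-in stages: the AI proposes two targets,
# the human approves only the first.
run_kill_chain(
    identify=lambda: ["target-1", "target-2"],
    human_approve=lambda t: t == "target-1",
    execute=lambda t: print(f"strike executed on {t}"),
)
```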
AI has also been deployed in drones used for reconnaissance and strike missions, as well as ground robots that handle explosives. Having machines perform dangerous tasks keeps soldiers safer.
However, there are two sides to every coin. While AI holds great promise in military applications, it also raises concerns over safety and controllability.
First, AI-driven systems are vulnerable to hacking and data manipulation. If an AI platform is compromised, sensitive information could be leaked or even used by adversaries to feed false intelligence.
Second, anyone who has used AI chatbots knows they sometimes make mistakes, yet present those errors or misleading analyses eloquently and convincingly. In the high-pressure context of war, this could make commanders and analysts more likely to accept flawed recommendations without scrutiny.
Peter Bentley, an honorary professor of computer science at University College London, told Lianhe Zaobao that the biggest problem right now with large language models is that when they have missing information, they “hallucinate”, which means they make things up.
… if an AI-generated error leads to a misjudgement, who should be held responsible — AI or humans?
He said: “We cannot tell if they have made up their findings or whether it is based on reality because AIs are so good at presenting plausible and realistic looking results. This means if you are targeting your weapons based on an AI analysis there is a very real chance you may not be hitting the right place. In the worst case that will mean innocent people (or your own people) are killed, all because someone believes the AI too much.”
Manoj Harjani, a research fellow at the S. Rajaratnam School of International Studies (RSIS) at Nanyang Technological University, also told Lianhe Zaobao: “One key set of concerns regarding the use of AI in the military domain stem from the difficulty in predicting how a system enabled by AI will behave, and the corresponding difficulty in understanding the reasons for that behaviour. This poses challenges for accountability and responsibility.”
In other words, if an AI-generated error leads to a misjudgement, who should be held responsible — AI or humans?
On the first day of the Iran war, a missile struck an elementary school in Minab, southern Iran, reportedly killing around 175 people. US media, citing informed sources, said the reason was outdated data that led to incorrect targeting coordinates.
The US has not admitted that its missile hit the school, stating only that an investigation is under way. When acknowledging the use of AI in combat, Cooper emphasised: “Humans will always make final decisions on what to shoot and what not to shoot and when to shoot.” Based on this, regardless of the investigation’s outcome, responsibility for casualties would lie with humans, not AI.
… the use of AI in the military is an unstoppable trend. Analysts believe that countries achieving decisive advantages in AI will control the pace of future conflicts.
Bentley stressed that this responsibility cannot be handed off to machines: “Ultimately, humans should always be responsible for human lives, and even if tools are used to help us understand situations, we must be the ones to check the validity of the tools, and if we really must kill each other then we should do it, and not abrogate responsibility to a machine.”
Huge challenges in responsible use of AI in military
Looking ahead, the use of AI in the military is an unstoppable trend. Analysts believe that countries achieving decisive advantages in AI will control the pace of future conflicts. As such, competition in the 21st century will centre on dominance in AI.
At the same time, however, a balance must be struck between operational efficiency and ethical responsibility. Beyond investing in AI technologies, safeguards, regulations and fail-safe mechanisms must be established to ensure responsible use.
Even as the US military deployed AI models developed by Anthropic in its strikes on Iran, the Pentagon reportedly clashed with Anthropic over safety red lines for military AI.
The dispute arose when the Pentagon requested that Anthropic sign a contract allowing the military unrestricted access to its technology for “all lawful purposes”.
However, Anthropic CEO Dario Amodei insisted that the contract must include two exceptions: no mass surveillance of US citizens, and no development of fully autonomous, unsupervised weapons.
Anthropic’s stance provoked strong dissatisfaction from the White House. On 27 February — the day before the US strike on Iran — US President Donald Trump instructed all federal agencies to immediately cease using Anthropic’s technology, while Defense Secretary Pete Hegseth ordered that Anthropic and its products be placed on a “supply chain risk” list as a threat to national security, an extremely rare measure historically reserved for foreign adversaries. Once listed, all Defence Department contractors would be prohibited from engaging in any commercial transactions with Anthropic.
One of the core issues in the dispute is autonomous weapons.
Autonomous weapons are inevitable — but will humans control them, or be controlled by them?
Lethal Autonomous Weapon Systems (LAWS) refer to weapons that can independently search for, identify, and attack targets using AI without human intervention. Critics often call them “killer robots”, the same term used for the T-800 robot portrayed by Arnold Schwarzenegger in The Terminator.
Many weapons today already possess some autonomous functions — missiles that can independently identify and strike targets, unmanned submarines that can clear mines, and drones capable of forming swarms and carrying out missions independently.
Drones have been widely used on the Ukrainian battlefield, though they are not fully autonomous, as humans still control them remotely.
With rapid advances in AI and robotics, fully autonomous weapon systems are only a matter of time. Yet it is troubling that there is currently no dedicated international treaty governing the use of AI in armed conflict, let alone regulating autonomous weapons.
A 10 March editorial on the website of the scientific journal Nature said international humanitarian law states clearly that weapons must not be used indiscriminately, and combatants must take precautions to verify their targets and minimise the risk of civilian casualties. These requirements should apply to AI as much as to any other military technology.
Mei Ching Liu, an associate research fellow with the Military Transformations Programme at RSIS, noted that the international community has been discussing the use of AI in the military domain for more than a decade. “Since 2016, the primary platform for these discussions has been the UN Group of Governmental Experts on Lethal Autonomous Weapon Systems (GGE on LAWS).”
She explained: “The current mandate of the GGE on LAWS is to formulate a set of elements for an instrument to address the issues raised by LAWS. The nature of the instrument has yet to be decided. In other words, the GGE does not have a mandate to negotiate a legally binding treaty. This mandate is set to expire at the end of this year. It is then up to the State Parties to the Convention on Certain Conventional Weapons to decide whether to extend the GGE’s term and, if so, under what specific mandate. Therefore, the short answer to the question of whether a legally binding agreement is possible this year is no.”
She added that broader discussions on military AI are also taking place in other forums, including the Responsible Artificial Intelligence in the Military Domain (REAIM) initiative and the First Committee of the UN General Assembly. The third REAIM summit, held in Spain from 4 to 5 February, saw its outcome document, Pathways to Action, endorsed by only around 40 countries — significantly fewer than in previous summits.
Liu said: “While the outcome document is not legally binding, it serves as an important political signal regarding the commitment to adhere to specific principles for the development and use of AI within the military domain. This decline in support is indeed concerning.”
… if the ultimate power over life and death is handed to machines with no emotion or moral judgement, will it lead to unnecessary killing? Will humans still be able to control AI — or will they themselves be controlled?
Harjani feels that while these conversations may not seem like much in terms of imposing constraints on states’ behaviour, they are an important avenue for states and other actors, including from the private sector, to exchange views and understand each other’s positions. “Through these exchanges, the aim should be to improve mutual understanding and encourage norms around responsible behaviour related to the use of AI in the military domain.”
Experts urge nations to retain control over AI
While regulatory measures are not yet in place, countries are unlikely to halt their pursuit of more powerful and autonomous weapons in the race for military advantage. When the technology matures, if the ultimate power over life and death is handed to machines with no emotion or moral judgement, will it lead to unnecessary killing? Will humans still be able to control AI — or will they themselves be controlled?
To avoid losing control of AI, Bentley warned that the best course is not to entrust our lives to these non-human intelligences.
Drawing the analogy of driving a train, he said that even if other countries appear to be advancing at greater speed in the AI arms race, one must not relinquish control, and must be prepared to intervene or apply the brakes when necessary.
“The answer is: don’t jump out of the driver’s seat. Keep driving it. And don’t be afraid to apply the brakes. Just because you can see other AI runaway trains travelling at a frighteningly high speed doesn’t mean you have to copy them.”
