Questions over AI capability as tech guides Iran strikes

The recent escalation of military operations involving the United States, Israel, and Iran has been accompanied by an unprecedented integration of artificial intelligence into combat strategies, sparking intense debate about the technology’s role in modern warfare. According to defense analysts, AI systems have been deployed to process intelligence and identify targets with remarkable efficiency, marking a significant evolution in military tactics.

Multiple reports indicate sophisticated AI algorithms have guided Israeli operations in Gaza and assisted in the capture of Venezuelan leader Nicolas Maduro during an American-led mission. Military experts further suggest artificial intelligence has played a crucial role in selecting targets for thousands of U.S. and Israeli strikes on Iranian positions since late February, though specific operational details remain classified.

Laure de Roucy-Rochegonde of the French Institute of International Relations (IFRI) notes that “every military power of any significance invests hugely in military applications of AI,” emphasizing that “almost any military function can be boosted with AI,” from logistics and reconnaissance to electronic warfare and cybersecurity. The technology has been integrated into semi-autonomous attack drones and various weapon systems. One of its most significant applications is the dramatic shortening of the “kill chain,” the sequence of steps between detecting a target and engaging it.

Among the prominent systems in use is the Maven Smart System (MSS) developed by Palantir, designed to identify and prioritize potential targets. Recent reports from The Washington Post indicate that Anthropic’s Claude generative AI model has been integrated with Maven to enhance its detection and simulation capabilities, though neither company responded to requests for comment.

Bertrand Rondepierre, head of the French army’s AI agency AMIAD, explains that AI algorithms “allow us to move much faster in handling information, and above all to be more comprehensive” by processing enormous datasets including satellite imagery, radar signals, electromagnetic waves, audio recordings, and real-time video feeds.

However, the deployment of AI in combat scenarios raises profound ethical and legal questions regarding human oversight. The controversy gained prominence during operations in Gaza, where Israeli forces utilized the “Lavender” targeting system, which reportedly operated within defined error margins. While such systems can be effective in confined areas covered by extensive surveillance infrastructure, experts question whether they could be reliably deployed in Iran, which lacks comparable data networks.

Peter Asaro, chair of the International Committee for Robot Arms Control, highlights the accountability dilemma: “If something does go wrong, then who’s responsible?” He cites the widely reported bombing of an Iranian school—which local authorities claim killed 150 people—as a potential case of mistaken AI targeting. The proximity of the site to facilities controlled by Iran’s Islamic Revolutionary Guard Corps raises questions about whether targeting errors stemmed from human judgment, algorithmic failure, or outdated intelligence.

Rondepierre maintains that fully autonomous AI systems operating without human supervision remain “science fiction” in military applications, emphasizing that French military commanders retain central control over targeting decisions. He insists that “no military decision-maker would agree to use an AI if he didn’t have trust in and control over what it’s doing.”

Benjamin Jensen of the Center for Strategic and International Studies suggests the current implementation represents merely the “beginning” of AI’s military application, noting that armed forces worldwide “haven’t fundamentally rethought how we plan, how we conduct operations, to take advantage” of AI’s capabilities. He predicts it will take “a generation for us to really figure this out” as military institutions gradually adapt to incorporating artificial intelligence into strategic frameworks.