A growing chorus of experts is raising urgent concerns about the military application of artificial intelligence following Israel’s unprecedented use of autonomous targeting systems in recent conflicts. The debate has intensified as evidence emerges of AI-driven targeting with minimal human oversight in both Gaza and Iran.
Trita Parsi, executive vice president at the Quincy Institute for Responsible Statecraft, highlighted disturbing parallels between Israeli operations in Gaza and Tehran. “In both cases, it appears Israel is using AI without any human oversight,” Parsi stated on social media platform X. He cited the bombing of a Tehran park mistakenly identified as a police facility due to its name, ‘Police Park’, which AI systems apparently targeted based on keyword association rather than actual military function.
Technology journalist Jacob Ward has characterized Israel’s deployment of AI in these conflicts as “lethal beta” testing, describing Gaza as the “prototype” and Iran as the “launch.” This approach represents what Ward calls “a live-fire, live-ordnance lab experiment on people” that has since become “a big industry in Israel” with export potential worldwide.
The controversy centers on systems like Habsora, an Israeli AI platform that generates airstrike targets far faster than traditional methods allow. According to intelligence sources, this technology has transformed military operations into what one insider described as a “mass assassination factory” prioritizing “quantity over quality” of targets.
Recent investigations reveal troubling operational protocols. Following the October 2023 Hamas-led attack, mid-ranking IDF officers received authorization to target any Hamas fighter regardless of rank, and initial civilian casualty caps of 20 per airstrike and 500 per day were quickly abandoned. Senior commanders occasionally approved strikes expected to kill more than 100 civilians when targeting high-value objectives.
One particularly devastating AI-assisted strike on the Jabalia refugee camp in October 2023, targeting a single Hamas commander with multiple 2,000-pound bombs, killed at least 126 people, including 68 children, and wounded 280 others. Hamas reported that seven hostages also perished in the attack.
The United States has similarly advanced its AI warfare capabilities, with the Washington Post reporting that Palantir’s Maven Smart System—incorporating Anthropic’s Claude AI language model—assisted US commanders in selecting 1,000 Iranian targets within the conflict’s first 24 hours.
Paul Scharre of the Center for a New American Security emphasized the critical need for human oversight: “AI gets it wrong… We need humans to check the output of generative AI when the stakes are life and death.”
The rapid militarization of AI has prompted an international response. Academics and legal experts recently convened in Geneva for UN-sponsored talks on lethal autonomous weapons systems, examining the risks posed by machines that can select and engage targets without meaningful human control. Craig Jones, a military targeting researcher at Newcastle University, warned that “the current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent.”
Contrary to claims that AI reduces civilian harm, Jones stressed that “there is no evidence that AI lowers civilian deaths or wrongful targeting decisions—and it may be that the opposite is true.”
The technological infrastructure enabling these systems includes Project Nimbus, a $1.2 billion cloud-computing and AI contract between the Israeli government and tech giants Amazon Web Services and Google Cloud. The agreement specifically prohibits the companies from refusing service to Israeli military or intelligence agencies.
As autonomous weapons systems continue evolving without comprehensive international regulation, the ethical and practical implications of AI-driven warfare remain among the most pressing security concerns of the modern era.
