Dr Pia Hüsch, Prerana Joshi and Noah Sylvia, RUSI / Translation by iPress
Dr Pia Hüsch, Prerana Joshi and Noah Sylvia, Royal United Services Institute (RUSI) researchers who study digital technologies and their military applications, argue that the hype around AI's role in the war in Iran obscures a deeper problem: modern militaries have traded deliberation for speed. AI is embedded throughout the US military machine, from logistics and training to targeting and cyber defense, so focusing on strikes alone misses the broader normalization of these technologies. The authors warn that even the strictest safeguards do not eliminate the risk that technology will dictate human decision-making rather than assist it, and that for Europe this war may be one more push towards technological sovereignty.
In the frantic media cycle, one theme stubbornly persists: the role of artificial intelligence in defense. Claims that the US military struck over 1,000 targets in the first 24 hours of the war in Iran pointed to a pace and scale of operations unattainable by human effort alone. Questions, rumors, and misunderstandings about defense AI have gained new life.
Before the AI Boom: Decades of Military Algorithms
The implementation of AI is now gaining momentum in both classified and unclassified defense systems, transforming the pace of military decision-making. Defense and intelligence organizations continue to seek ways to synthesize their vast stores of disparate data, refining the cycle of collecting, analyzing, and disseminating intelligence to gain some form of advantage over adversaries.
Yet while the underlying principles are nothing new for the American and Israeli military architectures, the spread of these tools across every level of the defense enterprise deserves greater attention. As AI tools become an integral part of corporate IT infrastructure and of the operational systems used to execute combat missions, they become normalized, and with normalization comes the risk that human judgment is traded away in the pursuit of speed.
Demystifying the Use of AI in Targeting Cycles
A significant share of media attention has focused on the US military's use of AI for targeting in the war in Iran. Reports say Anthropic's Claude was used within Palantir's Maven Smart System (MSS) to support American targeting, allowing thousands of targets to be struck within days. The claim immediately caused an uproar, and parallels were quickly drawn to Israel's use of AI for targeting in the genocidal campaign in Gaza.
Since the Algorithmic Warfare Cross-Functional Team (better known as Project Maven, and infamous after Google employee protests in 2018) first sought to integrate AI into Department of Defense workflows nearly a decade ago, AI has become an indispensable part of the digital modernization of targeting, not only in the US but also in militaries such as those of China, Russia, Ukraine, and the UK.
What seems novel is the scale and pace of joint US-Israeli operations in Iran. The US has never conducted precision strikes at such a tempo in its modern history, an achievement that appears to have been made possible by the widespread implementation of AI across its digitized systems.
As detailed in other publications, the algorithms do not replace humans outright: they do not simply ingest all the raw data and produce a finished, approved target profile. Instead, external AI models are integrated into Palantir's MSS, which is itself hosted on Amazon Web Services. MSS is a platform that pulls data from a wide range of sources into a single dashboard and offers a suite of decision-support workflow tools, particularly for targeting. Many of these tools employ AI and are used to enhance capabilities or perform specific tasks in the targeting cycle (a simplified sketch of such a workflow follows the list below), such as:
- fusing data and intelligence;
- detecting, identifying, classifying, and tracking objects and persons of interest;
- creating, modeling, testing, refining, and prioritizing courses of action (COA);
- visualizing intelligence or courses of action;
- directing, tasking, coordinating, and synchronizing combat assets and fires;
- estimating collateral damage;
- assessing post-strike outcomes.
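To illustrate the division of labor this list implies, the following is a minimal, purely hypothetical Python sketch of a human-in-the-loop targeting workflow. It is not the MSS API, and every class and function name here is invented for illustration; the only claim it encodes is the one made above, that machine pre-filtering ends with a named, accountable human.

```python
# Purely illustrative sketch of a human-in-the-loop targeting workflow.
# All names are hypothetical; this is NOT the Maven Smart System API.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    sources: list[str]       # e.g. ["signals", "overhead imagery"]
    label: str               # model-assigned classification
    confidence: float        # model confidence, 0..1

@dataclass
class Nomination:
    track: Track
    est_collateral: float           # automated collateral damage estimate
    approved_by: str | None = None  # must name a human before tasking

def fuse(feeds: dict[str, list[dict]]) -> list[Track]:
    """Stand-in for data fusion: merge multi-source reports into tracks."""
    return [Track(r["id"], [source], r["label"], r["conf"])
            for source, reports in feeds.items() for r in reports]

def estimate_collateral(track: Track) -> float:
    """Placeholder for a real collateral damage estimation model."""
    return 0.1

def nominate(tracks: list[Track], conf_floor: float, cde_ceiling: float):
    """Machine pre-filtering: only confident, low-CDE tracks go forward."""
    for t in tracks:
        cde = estimate_collateral(t)
        if t.confidence >= conf_floor and cde <= cde_ceiling:
            yield Nomination(t, cde)

def approve(nom: Nomination, analyst: str) -> Nomination:
    """No tasking without a named human: the record shows who signed off."""
    nom.approved_by = analyst
    return nom

feeds = {"signals": [{"id": "T1", "label": "launcher", "conf": 0.92}]}
for nom in nominate(fuse(feeds), conf_floor=0.85, cde_ceiling=0.3):
    approve(nom, analyst="analyst_callsign")   # human sign-off, logged
```

However much filtering the models do in a sketch like this, every nomination is tied to a named person, which is the property the accountability argument below depends on.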
The level of machine automation varies across these tasks, and several Western militaries have set "appropriate" or "context-dependent" standards for the human involvement required. The flexibility of these formulations centers on human judgment rather than rigid human control over every action on the battlefield. Automation does not absolve responsibility, however: even if a target is struck without an explicit human decision, there must be commanders and analysts who are held accountable for any errors or miscalculations that lead to civilian casualties.
For example, the American strike on a school in Minab, which claimed the lives of nearly 200 civilians, appears to have been the result of incorrect historical data. That is a human error, not a machine error, caused by a lack of diligence in the intelligence-targeting cycle and likely exacerbated by the weakening of civilian harm mitigation bodies. AI-based tools were almost certainly involved in the tracking, modeling, and/or weapon selection for this target, but it is humans, from commanders and analysts to technical experts, who should bear responsibility both legally and in public discourse.
Given the sensitivity and the technical and operational complexity of the targeting cycle, the degree of human oversight over AI-based functions in Iran remains unclear. But even if urgency forces target selection under pressure, the Department of Defense must be held accountable for how its internal processes adapt to the adoption of AI. Internal accountability mechanisms must remain robust and transparent despite the difficulty of assigning responsibility for fast, complex, and often automated tasks. Models and workflows need continuous testing and improvement so that lessons learned are incorporated and mistakes are not repeated. If proportionality assessments are automated, those who own the risk at senior levels must understand how accurate those assessments are and how operators weight the minimization of civilian harm; a sketch of what such an audit might look like follows below. More broadly, modern militaries must ensure that operational efficiency and tactical effect do not substitute for thoughtful, lawful strategy.
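To make that last demand concrete, here is a minimal, hypothetical Python sketch of the kind of internal audit the paragraph calls for: comparing automated pre-strike civilian harm estimates against post-strike assessments so that systematic underestimation becomes visible to those who own the risk. All field names and thresholds are invented for illustration.

```python
# Hypothetical sketch: auditing an automated collateral damage estimator
# against post-strike assessments. Field names and tolerances are invented.

def audit(records: list[dict], tolerance: float = 0.2) -> dict:
    """Each record pairs the model's pre-strike civilian-harm estimate
    ("predicted") with the observed post-strike assessment ("observed")."""
    misses = [r for r in records
              if r["observed"] > r["predicted"] * (1 + tolerance)]
    return {
        "n": len(records),
        "underestimates": len(misses),   # cases the model got badly wrong
        "worst_case": max((r["observed"] - r["predicted"] for r in records),
                          default=0.0),
    }

history = [
    {"target": "T1", "predicted": 0.0, "observed": 0.0},
    {"target": "T2", "predicted": 1.0, "observed": 4.0},  # underestimated
]
print(audit(history))  # flags strikes where harm exceeded the estimate
```

Nothing here depends on any particular vendor's system; the point is simply that an automated estimator can only be governed if its error profile is routinely measured and reported.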
Beyond Targeting: Realities of Deploying Defense AI
AI-based targeting has monopolized attention. That singular focus risks overlooking other defense applications and obscures the quieter ways in which US military operations are being reshaped at the level of planning, coordination, and execution. To understand AI's impact on the defense sector, the discussion must examine how different forms of AI (predictive, generative, agentic, and others) are used differently in military applications, and ask where and why the safeguards around them might diverge.
- Logistics. Support functions are often cited as the primary arena for defense AI, from supply-chain tools that speed the coordination of fuel and ammunition deliveries to predictive algorithms that optimize equipment maintenance schedules (sketched after this list). The US Defense Logistics Agency (DLA) reports over 200 use cases and 55 AI models at various stages of deployment in its operations. It is easy to file these cases alongside mundane, ubiquitous productivity tools, which risks narrowing public discussion of their role in shaping military operations.
- Training and Exercises. With varying degrees of sophistication, the US military uses advanced AI toolkits for personnel training, generating operational scenarios at superhuman speed to simulate real situations. The 2026 US Department of Defense AI Strategy underlines the point: exercises that fail to account for AI are subject to review by cost-control program directors.
- Translation and Linguistics. Less visibly, AI-based natural language processing (NLP) systems can sift through and synthesize large volumes of data, such as intercepted communications, across multiple source languages, again at machine speed (also sketched below). Such linguistic synthesis could ease the load on human translators, but sparse and fragmented data in classified environments remains a defense-wide challenge for deploying AI in these contexts.
- Cybersecurity. AI can also be put to work in cyberspace. Given its potential to support defensive cyber operations across US federal and national government systems and in security operations centers, the intersection of AI and cyber defense deserves closer attention.
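As flagged in the logistics bullet above, here is a toy Python sketch of predictive maintenance scheduling. The logistic risk score stands in for whatever trained model an agency like the DLA might actually field; every field name and threshold is hypothetical.

```python
# Toy sketch of predictive maintenance: rank equipment by estimated
# failure risk and service the riskiest items first. The linear score
# is a stand-in for a trained model; field names are invented.
import math

def failure_risk(hours_since_service: float, vibration_rms: float) -> float:
    """Logistic score in (0, 1): higher means more likely to fail soon."""
    z = 0.004 * hours_since_service + 1.5 * vibration_rms - 3.0
    return 1.0 / (1.0 + math.exp(-z))

fleet = [
    {"id": "truck-07", "hours": 900,  "vibration": 0.8},
    {"id": "truck-12", "hours": 150,  "vibration": 0.2},
    {"id": "gen-03",   "hours": 1200, "vibration": 1.1},
]

# Service anything above threshold, riskiest first, within shop capacity.
THRESHOLD, CAPACITY = 0.5, 2
ranked = sorted(fleet, reverse=True,
                key=lambda e: failure_risk(e["hours"], e["vibration"]))
schedule = [e["id"] for e in ranked
            if failure_risk(e["hours"], e["vibration"]) > THRESHOLD][:CAPACITY]
print(schedule)  # e.g. ['gen-03', 'truck-07']
```

The design point is mundane but real: the model only ranks and filters, while capacity and risk thresholds remain policy choices made by people.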
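And as flagged in the translation bullet, the sketch below shows the shape of such a pipeline using public open-source checkpoints (the Hugging Face transformers library, a Persian-to-English Marian model, and a BART summarizer) purely as stand-ins for whatever systems a classified environment would actually run.

```python
# Illustrative only: public checkpoints standing in for classified NLP stacks.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fa-en")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

intercepts: list[str] = [
    # Persian-language message bodies would go here.
]

# Step 1: batch-translate each intercept into English.
english = [translator(text)[0]["translation_text"] for text in intercepts]

# Step 2: condense the translated batch into one analyst-facing brief.
if english:
    brief = summarizer(" ".join(english), max_length=60, min_length=5)
    print(brief[0]["summary_text"])
```

The hard part the bullet names, sparse and fragmented data in classified settings, is precisely the material such off-the-shelf models have never seen.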
It is equally important to distinguish between predictive and generative models. Use cases that move beyond predictive analytics for logistics and maintenance schedules toward generative models may create additional risks, since such models "may have a weak understanding of the nuances of military policy". Although the US military has invested considerable effort in the ethics of AI use, challenges remain, especially the risk of personnel over-relying on these models. If decision-making cycles for targeting in the Iran conflict are compressed, does the relative speed of human decision-making simply stay the same? Whatever safeguards are in place, this interdependence of humans and machines operating at different tempos risks technology driving human decision-making rather than aiding it.
Deploy or Divorce: A Technology Provider’s Classified Nightmare
The breadth of these use cases shows how deeply AI tools are embedded across US military and intelligence structures. Although the US Department of Defense has invested in diversifying its pool of commercial suppliers, contracting $800 million with OpenAI, Anthropic, Google, and xAI in 2025, the ability to deploy complex data-processing models in classified environments goes beyond programming: it requires an understanding of military nuance, and that expertise remains scarce.
Although highly classified environments are shrouded in secrecy, the debates behind closed doors over whether to deploy AI models or part ways with them are becoming increasingly public. In a recent letter, Google employees urged their CEO to "reject any classified workloads" from the US government. On another front, Anthropic was the first to deploy its systems in a classified environment, and its "divorce" from the Department of Defense predictably proved complicated: disentangling assets, personnel, and knowledge rather than simply, as it were, "unplugging" Claude from US defense. Across the Atlantic, the discussion about over-dependence on American technology suppliers in sensitive systems now needs to accelerate in light of the lessons from these high-profile entanglements of American companies with military and intelligence systems.
Across the Ocean: Implications for European Partners
Factual assessments of how AI is used in defense may play only a secondary role in shaping broader European public perception of the topic. Instead, public opinion is likely to be guided by broader, media-visible reporting on the use of AI in war. Given the already low approval of the American-Israeli attack among Europeans, this could be one more impetus for greater independence from American technology and defense policy.
Amid strained relations with the second Trump administration, European allies are already actively pursuing European technological sovereignty, particularly for AI applications in defense. But the growth of European companies offering alternatives to American defense-technology suppliers is not only a matter of trust; it is also direct economic competition, especially where non-American origin is perceived as a selling point.
If public perception of AI use in the Iran conflict continues to be associated with civilian casualties, with a war many consider contrary to international law, and with a campaign that even the closest US allies, such as the UK, hesitate to fully support, that could give further impetus to the European defense-technology ecosystem. Whether European AI capabilities would actually select targets more discriminately, and whether European commanders would use them differently under protocols, doctrines, and ethical frameworks that differ significantly from American and Israeli ones, is another question.
