March 07, 2026
War has always been shaped by technology. From gunpowder to nuclear weapons, each technological shift has altered not only the battlefield but also the moral and political calculations that govern conflict.
Yet the integration of artificial intelligence into modern warfare may represent a transformation far more profound than previous military revolutions. The unfolding confrontation involving Iran, the US and Israel is increasingly being described as the world’s first full-scale AI war, a conflict in which machine learning systems, algorithmic targeting tools and autonomous weapons are deeply embedded in operational decision-making. If this characterisation proves accurate, the consequences will extend far beyond the Middle East.
At the centre of this emerging paradigm is the growing use of AI systems to accelerate and automate key military functions. Advanced machine-learning platforms are now being used for operational planning, target identification, intelligence analysis, logistics coordination and battlefield simulation. Tools such as Claude AI, deployed for military operational planning, illustrate how commercial AI platforms are increasingly integrated into defence applications. AI models originally designed for business analytics or natural language processing are now assisting with operational decision-making in complex military environments. This shift blurs the line between civilian technology ecosystems and military infrastructure in ways that policymakers have barely begun to understand.
Even more consequential is the continued expansion of Project Maven, a machine-learning platform developed by the US Department of Defense to automate aspects of the military ‘kill chain’. Project Maven was originally introduced to process surveillance footage and identify potential targets faster than human analysts could. Over time, however, its capabilities have expanded significantly. Developed with the involvement of major technology firms, including Palantir Technologies, Amazon Web Services, Microsoft and satellite intelligence provider Maxar Technologies, the system now contributes to target recognition, operational planning and intelligence synthesis. In effect, it reduces the time between identifying a potential target and executing a strike, profoundly altering the tempo of modern warfare.
On the battlefield itself, AI is increasingly embedded in weapons platforms. A striking example is the LUCAS drone, a reverse-engineered adaptation of Iran’s widely used Shahed-136 design. Unlike traditional drones that rely heavily on remote operators, AI-enabled systems like LUCAS can operate with significant autonomy, coordinating with other drones in swarms and adapting to battlefield conditions in real time. Autonomous swarm capability represents a dramatic shift in warfare: instead of a single expensive aircraft piloted by a human operator, militaries can deploy dozens or hundreds of AI-directed drones that cooperate algorithmically to overwhelm defences.
Israel has also been at the forefront of integrating AI into military targeting operations. Systems such as Habsora and Lavender assist in identifying potential strike targets and determining the operational value of attacks. These systems are designed to analyse massive quantities of data: communications intercepts, satellite imagery, behavioural patterns and social network analysis. From these inputs they generate lists of individuals or infrastructure considered legitimate military objectives. The automation of such processes enables militaries to produce target lists at an unprecedented scale.
However, the same efficiency that makes AI attractive to military planners also introduces profound ethical risks. Certain targeting systems incorporate algorithmic calculations that weigh expected civilian casualties against the perceived strategic value of eliminating a specific target. In some scenarios, civilian losses in the dozens or even hundreds may be deemed acceptable if the algorithm assigns sufficiently high value to the target. When decisions about life and death are partially delegated to machines operating on statistical models rather than human judgment, the moral boundaries of warfare become dangerously blurred.
Iran appears to understand the implications of this technological shift. One of its strategic responses has involved attacks on regional digital infrastructure, particularly data centres linked to cloud computing providers. Strikes against facilities connected to companies like Amazon in the UAE and Bahrain disrupted regional cloud services and highlighted a new dimension of modern conflict: the targeting of digital infrastructure that supports AI-enabled warfare. In an era where military algorithms rely on massive computational resources and real-time data processing, cloud infrastructure has effectively become part of the battlefield. Data centres, satellite links and digital networks are now as strategically significant as airbases or naval ports.
The broader danger of AI-driven warfare lies in the phenomenon known as decision compression. Traditionally, military planning involves a series of deliberative stages: intelligence gathering, analysis, strategic discussion, and command authorisation. AI dramatically accelerates these processes. Algorithms can analyse thousands of potential targets in minutes, simulate battle outcomes in real time and recommend operational responses faster than human analysts can review them. While this speed can offer tactical advantages, it also reduces the opportunity for reflection and restraint. In a conflict environment, faster decisions often mean less scrutiny and fewer safeguards against mistakes.
This compression of decision-making timelines also changes the role of human operators. In theory, humans remain ‘in the loop’, responsible for approving AI-generated recommendations before actions are taken. In practice, however, the complexity and speed of algorithmic systems can create a dynamic in which human operators become heavily reliant on machine recommendations. When an AI system flags a target with a high probability score and recommends immediate action, operators may feel pressured to approve the strike quickly rather than challenge the algorithm’s assessment. Over time, this dynamic risks turning human oversight into a mere procedural formality rather than a meaningful safeguard.
Another major concern is the opacity of AI systems. Many modern machine-learning models operate as black boxes, meaning that even their designers cannot fully explain how they arrive at specific conclusions. When such systems are used for targeting or operational planning, military commanders may find themselves relying on recommendations they do not fully understand. This creates a paradoxical situation: decision-makers are expected to take responsibility for actions guided by systems whose internal logic remains largely opaque.
Beyond the battlefield, there are also troubling implications for domestic governance and civil liberties. Technology companies involved in military AI development gain access to enormous datasets generated during conflict operations. These datasets include surveillance imagery, behavioural analytics and communications metadata: precisely the types of information that can be used to train algorithms for policing and population monitoring. Once developed, such tools can easily migrate from military use abroad to domestic applications, including riot control, predictive policing or counterinsurgency operations against internal dissent.
Recent research underscores just how unpredictable AI decision-making can be in strategic scenarios. A study by researchers at King’s College London examined how advanced AI models behave in simulated geopolitical war-gaming environments. The findings were deeply unsettling: in approximately 95 per cent of scenarios, AI systems chose to escalate conflicts towards nuclear confrontation when such options were available. The algorithms, operating according to strategic optimisation logic, concluded that early escalation offered the highest probability of ‘victory’. While these simulations do not mean that AI systems would automatically trigger nuclear war, they highlight how algorithmic decision-making can produce outcomes radically misaligned with human notions of restraint and deterrence.
The evolution of AI warfare did not begin with the Iran conflict. The war in Ukraine served as an experimental laboratory where both sides tested AI-assisted reconnaissance, drone swarms and automated battlefield analytics. Meanwhile, in the Gaza Strip, AI-assisted targeting was employed in military operations at large scale. What is emerging now in the confrontation involving Iran appears to be the next stage: industrial-scale integration of AI across nearly every aspect of warfare.
If this trajectory continues, the future battlefield may resemble automated industrial production more than traditional combat. Algorithms will identify targets, autonomous drones will execute strikes, and machine learning systems will analyse results and generate the next set of recommendations, all within compressed timeframes that leave little room for human deliberation. The danger is not simply that AI will make war more efficient but that it may also make war easier to wage and harder to control.
For the global community, the implications are profound. International law, humanitarian norms and military doctrines were developed for an era when human decision-makers remained central to the conduct of war. AI challenges that assumption. Without clear rules governing autonomous weapons, algorithmic targeting and military AI deployment, the world may soon face conflicts in which machines operate at speeds and scales beyond human oversight.
The writer is a trade facilitation expert, working with the federal government of Pakistan.
Disclaimer: The viewpoints expressed in this piece are the writer's own and don't necessarily reflect Geo.tv's editorial policy.
Originally published in The News