Unregulated AI use in warfare causing civilian tragedy, raising global legal alarm

Cyber warfare targets civilian services, raising serious human rights concerns

A message reading "AI artificial intelligence", a keyboard, and robot hands are seen in this illustration taken on January 27, 2025. — Reuters

Modern warfare is witnessing a dangerous shift as artificial intelligence (AI) and emerging technologies are being used recklessly, unethically, and brutally — giving rise to what experts warn could become a major human tragedy.

Cyber warfare has completely reshaped military strategy. While digital tools may bring operational advantages, their use has destroyed civilian infrastructure and killed thousands of innocent people in war-torn countries, amounting to serious violations of human rights. Experts also question the lack of protection for data managed by major tech giants.

Dr Rana Shahzad, an expert in cyber security, said that digital attacks have inflicted severe damage on civilian infrastructure, including power stations, water facilities, and banking systems. He warned that using AI-driven data without understanding ground realities in warfare amounts to a grave breach of both human rights and international law.

He cited United Nations Secretary-General Antonio Guterres, who expressed "deep concern" over Israel's use of AI in the conflict. AI experts have also warned that the use of such systems may amount to war crimes, as they cannot reliably distinguish between civilians and combatants.

According to The Washington Post and other international media, Israel used AI systems known as Gospel and Lavender in attacks on Gaza, reportedly targeting Hamas commanders such as Mohammed Deif and Yahya Sinwar. However, these attacks also killed hundreds of innocent Palestinians, including women, children, journalists, health workers, and elderly citizens. Civilian buildings such as schools, hospitals, and welfare institutions were also destroyed. The AI systems reportedly failed to distinguish between military and civilian targets, a serious violation of international law.

In the Israel–Iran conflict, Israeli hackers conducted cyberattacks that destroyed data from Sepah Bank and threatened to publish stolen data from an Iranian cryptocurrency exchange. Blockchain investigators later confirmed that during the breach, nearly $81.7 million worth of digital assets were stolen from Nobitex wallets. In retaliation, Iran targeted Israeli infrastructure including water systems, transportation control, and surveillance networks.

AI-based cyber warfare also featured in the May military skirmish between Pakistan and India, during which official websites were hacked, power systems were jammed, and AI-powered drones and missiles were deployed.

Earlier, in 2015, amid the Russia–Ukraine conflict, a cyberattack on Ukraine's power grid left over 200,000 people without electricity in severe winter conditions. Human Rights Watch has warned that while digital tools and apps are not conventional weapons, their use in military operations must comply with international humanitarian law.

Experts say there is now a growing need for strict global regulations and clearly defined rules of engagement for the use of AI in warfare. They also stress the importance of data protection by tech companies and technological self-reliance among nations to prevent data theft and maintain secure systems.