AI in Armed Conflict: Application of International Humanitarian Law to Autonomous Weapons

Published On: September 30, 2025

Authored By: Ayush Chaudhary
Shobhit University, Gangoh (SUG)

Abstract

The rapid integration of Artificial Intelligence (AI) into military operations has introduced Autonomous Weapons Systems (AWS): armed devices capable of selecting and engaging targets with limited or no human intervention. These systems challenge foundational principles of International Humanitarian Law (IHL): distinction, proportionality, and precaution. This article critically examines the sufficiency of current legal frameworks in governing AWS, focusing on accountability gaps, interpretive challenges, and indicators from state practice. It draws on emerging battlefield examples from Ukraine, analyzes legal positions in international forums, and highlights the Indian perspective. The conclusion emphasizes the urgency of reform through binding norms, meaningful human control mandates, and enhanced accountability mechanisms to prevent the erosion of humanitarian protections in the era of smart warfare.

Introduction

AI-enabled weaponry is increasingly transforming modern combat, with systems such as advanced drones, sentry turrets, and loitering munitions operating autonomously to identify and strike targets. These developments test the limits of IHL, a body of law conceptualized around human decision-makers in conflict^1.

Recent deployments in Ukraine, including AI-assisted drone swarms and the high-profile “Operation Spiderweb” offensive, demonstrate that these technologies are not speculative but real and lethal^2. Without legal clarification, AWS may operate beyond the protective intent of IHL. This article explores this juncture of law, ethics, and technology, arguing that regulatory adaptation is not only necessary but urgent.

I. International Humanitarian Law Basics

IHL is built on the Geneva Conventions (1949), their Additional Protocols (1977), the Hague Conventions (1899/1907), and customary law as identified by the ICRC^3. Its core principles are:

  1. Distinction – distinguish between civilians and combatants^4;
  2. Proportionality – avoid civilian harm excessive in relation to the anticipated military advantage^5;
  3. Precaution – take all feasible measures to minimize harm^6.

AWS, with limited judgment and high complexity, may undermine these norms.

II. AWS Technologies & Real-World Deployment

The U.S. Department of Defense defines AWS as systems that, once activated, can select and engage targets without further human intervention^7. Its classification distinguishes semi-autonomous, human-supervised autonomous, and fully autonomous systems^8.

Ukraine is a testing ground: AI-powered drones coordinate autonomously, bypassing signal jamming and scaling via swarm tactics^9. Auterion’s guidance kits, which enable strike drones to lock onto targets independently, have greatly expanded this automation^10. The dramatic “Operation Spiderweb” strike deep inside Russia exemplifies how machine learning can orchestrate complex drone missions with minimal human control^11.

III. Evaluating IHL Principles with AWS

A. Distinction

Article 48 of Protocol I requires parties to distinguish between civilians and combatants and to direct attacks only at military objectives^12. AI perception systems may misidentify targets in complex urban or battlefield scenes, as their processing is blind to ethical context and vulnerable to adversarial interference.

B. Proportionality

Article 51(5)(b) prohibits attacks expected to cause civilian harm excessive in relation to the concrete and direct military advantage anticipated^13. AWS lack the moral calculus required to assess such trade-offs, especially under evolving battlefield conditions.

C. Precaution

Under Article 57, parties must take all feasible precautions to reduce civilian harm^14. Without meaningful human control, AWS may repeat or escalate attacks on the basis of misaligned or outdated inputs.

IV. Accountability & Legal Gaps

As AWS make autonomous decisions, accountability becomes opaque. Is culpability attributable to the commander giving orders, the manufacturer designing the system, or the state deploying it? Existing doctrines of command responsibility, product liability, and state responsibility each presuppose a degree of direct human agency that AWS erode. The ICRC has squarely noted this gap, calling for sustained human responsibility across the targeting chain^15.

V. International & Indian Legal Responses

On the international stage, states are split:

  • Ban advocates (Austria, Chile): push for outright prohibition^16.
  • Regulatory proponents (Germany, Japan): support constrained control frameworks.
  • Status quo defenders (U.S., Russia, Israel): argue that IHL suffices^17.

India has maintained a cautious posture, pushing for clearer definitions and rigorous debate before committing to new norms. Domestically, military AI integration efforts remain subject to standard IHL adherence, but no AWS-specific policy has been announced^18.

VI. Ethical & Strategic Implications

Granting machines the authority to kill disrupts the moral and emotional accountability of warfare and risks normalizing lethal automation. Proponents argue that AWS can reduce collateral damage; critics caution that distance and automation can desensitize operators and entrench warfare by lowering its human and political costs.

Conclusion & Recommendations

AWS are transforming conflict. We must act to ensure IHL not only governs these systems but evolves with them. I recommend:

  • Legal codification of Meaningful Human Control in all AWS deployments.
  • Enhanced Article 36 reviews, including AI-specific operational testing.
  • Transparency and data-sharing across national review processes.
  • Negotiation of a binding treaty or non-binding declaration establishing operational norms.
  • Implementation of technical fail-safes, explainable AI standards, and post-action audit trails (a conceptual sketch follows this list).
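
To make the last recommendation concrete, the Python sketch below illustrates one possible way a meaningful-human-control fail-safe and post-action audit trail could be wired into an engagement pipeline. It is a purely conceptual illustration under assumed requirements, not a description of any fielded system or cited source; the names (EngagementRequest, require_human_confirmation), the 0.90 confidence threshold, and the audit-log path are all hypothetical.

import json
import time
from dataclasses import dataclass, asdict

# Hypothetical sketch of a human-in-the-loop fail-safe with an
# append-only audit trail; names and thresholds are invented for
# illustration and do not describe any real weapon system.

@dataclass
class EngagementRequest:
    target_id: str
    classifier_label: str  # what the perception model believes it sees
    confidence: float      # model confidence in [0, 1]
    location: str

AUDIT_LOG = "engagement_audit.jsonl"  # post-action audit trail (assumed path)

def log_decision(request: EngagementRequest, decision: str, operator: str) -> None:
    """Record every request and its outcome for post-action review."""
    entry = {"time": time.time(), "operator": operator,
             "decision": decision, **asdict(request)}
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def require_human_confirmation(request: EngagementRequest, operator: str) -> bool:
    """Fail-safe gate: no engagement proceeds without explicit human
    approval, and low-confidence classifications are refused outright."""
    if request.confidence < 0.90:  # illustrative threshold only
        log_decision(request, "auto-refused: low confidence", operator)
        return False
    answer = input(f"Confirm engagement of {request.target_id} "
                   f"({request.classifier_label}, "
                   f"confidence {request.confidence:.2f})? [y/N] ")
    approved = answer.strip().lower() == "y"
    log_decision(request, "approved" if approved else "denied", operator)
    return approved

if __name__ == "__main__":
    req = EngagementRequest("T-042", "military vehicle", 0.94, "grid 31U")
    if require_human_confirmation(req, operator="op-7"):
        print("Engagement authorized by human operator.")
    else:
        print("Engagement blocked; see audit log.")

The design point is modest: authorization passes through a human decision, low-confidence classifications fail safe, and every outcome is recorded, producing exactly the kind of reviewable record that Article 36 reviews and post-action audits would require.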

IHL’s protective mandate must remain central regardless of whether it is a human or a machine issuing the final command.

References

  1. Paul Scharre, Army of None: Autonomous Weapons and the Future of War 13–16 (W.W. Norton 2018).
  2. Time, “Ukraine Just Demonstrated What AGI War Could Look Like” (June 1, 2025) (on Operation Spiderweb).
  3. Jean-Marie Henckaerts & Louise Doswald-Beck, Customary International Humanitarian Law, Vol. I (ICRC 2005).
  4. Protocol I, art. 48, June 8, 1977, 1125 U.N.T.S. 3.
  5. Id. art. 51(5)(b).
  6. Id. art. 57.
  7. U.S. Dep’t of Def., Directive 3000.09: Autonomy in Weapon Systems (2012, updated 2017).
  8. Id.
  9. Reuters, “Ukraine rushes to create AI-enabled war drones” (July 18, 2024).
  10. Reuters, “Auterion says it will provide Ukraine with 33,000 AI drone guidance kits” (July 28, 2025).
  11. Time, supra note 2.
  12. Protocol I, supra note 4, art. 48.
  13. Id. art. 51(5)(b).
  14. Id. art. 57.
  15. ICRC, “Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control” (2021).
  16. U.N. Secretariat, statements at CCW GGE (2023).
  17. Id.
  18. India Ministry of Defence, Defence Artificial Intelligence Strategy (2022).
