Published on: 4th December 2025
Authored by: Aditi Khare
Dr. D. Y. Patil College of Law, University of Mumbai
INTRODUCTION
The emergence of artificial intelligence (AI) has fundamentally transformed the nature of warfare. Militaries across the world are increasingly incorporating AI technologies into command systems, surveillance operations, and weaponry, leading to the development of autonomous weapons systems (AWS): machines capable of independently identifying, selecting, and engaging targets without direct human control. While such innovations promise greater precision, speed, and operational efficiency, they also pose serious legal, ethical, and humanitarian challenges. The delegation of life-and-death decisions to machines raises questions about compliance with the established norms of International Humanitarian Law (IHL), which governs conduct during armed conflict to limit human suffering.
IHL, grounded in the Geneva Conventions of 1949[1] and their Additional Protocols[2], sets out key principles including distinction, proportionality, military necessity, and humanity. These principles were designed for human decision-makers, not autonomous systems operating on algorithmic logic. As a result, applying IHL to AI-driven weapons is not straightforward. Issues such as target identification, proportionality assessment, and accountability for wrongful harm become increasingly complex when autonomous systems act with limited or no human supervision. Moreover, the adaptive nature of AI, particularly in machine learning models that evolve based on data inputs, challenges traditional notions of weapons review and legal predictability, as required under Article 36 of Additional Protocol I (1977).
This article seeks to analyze the application of International Humanitarian Law to autonomous weapons systems, assessing whether existing legal principles are adequate to regulate their use. It further explores the ethical implications of delegating lethal authority to machines and examines international efforts, including ongoing debates within the United Nations Convention on Certain Conventional Weapons (CCW), to establish norms or prohibitions for such technologies. Ultimately, this study aims to highlight the urgent need for renewed legal interpretation and global consensus to ensure that the integration of AI into armed conflict remains consistent with the humanitarian objectives of international law.
WHAT IS AI?
AI involves the use of computer systems to carry out tasks that would normally require human cognition, planning, or reasoning. Well-known examples of such systems include ChatGPT and Alexa. Algorithms form the foundation of AI systems: a traditional algorithm is a set of instructions or rules that a computer or machine follows to answer a question or solve a problem. AI works by using large amounts of data and advanced algorithms to identify patterns and make predictions. It learns from examples, just as humans learn from experience. For instance, an AI system can be trained to recognize faces, translate languages, or recommend movies by analyzing previous data. AI is transforming almost every field, from education and entertainment to healthcare and warfare.
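To make the idea of "learning from examples" concrete, the short sketch below (written in Python with the open-source scikit-learn library, using invented, purely hypothetical data) fits a toy classifier on a handful of labelled examples and then asks it to label an input it has never seen. It is illustrative only and does not reflect any military system discussed in this article.

```python
# A minimal sketch of "learning from examples": a toy classifier is fitted
# on labelled data points and then predicts the label of an unseen point.
# The feature values and labels below are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: each row is [height_cm, weight_kg],
# and the label says whether the example is a "cat" or a "dog".
X_train = [[25, 4], [30, 5], [60, 25], [70, 30]]
y_train = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # the model infers patterns from the examples

print(model.predict([[28, 4.5]]))  # -> ['cat'], a prediction for an unseen input
```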
In short, artificial intelligence is one of the most significant technological advancements of our time. It continues to evolve, offering immense benefits while also raising important questions about ethics, privacy, security, and the future of human work.
THE LEGAL FRAMEWORK: INTERNATIONAL HUMANITARIAN LAW (IHL)
International Humanitarian Law, often referred to as the law of armed conflict, governs conduct during warfare to limit human suffering. Rooted in the Geneva Conventions of 1949 and their Additional Protocols, IHL seeks to balance military necessity with humanitarian considerations. Its key principles are:
- Distinction: Parties must distinguish between combatants and civilians, targeting only legitimate military objectives.
- Proportionality: Attacks must not cause excessive civilian harm relative to the anticipated military advantage.
- Military Necessity: Force must be limited to what is necessary to achieve a legitimate military objective.
- Humanity: Prohibits means and methods of warfare that cause unnecessary suffering.
Additionally, Article 36 of Additional Protocol I (1977)[3] requires states to conduct a legal review of any new weapon to ensure compliance with IHL before its deployment.
HOW IS AI BEING DEPLOYED IN ARMED CONFLICTS?
Armed forces are investing heavily in AI, deploying it on the battlefield to inform military operations and as part of weapon systems. The International Committee of the Red Cross (ICRC) has highlighted areas in which AI is being developed for use by armed actors in warfare, which raise significant questions from a humanitarian perspective[4]. These areas are:
- Integration in weapon systems, particularly autonomous weapon systems
- Use in cyber and information operations
- Underpinning military decision support systems
Autonomous weapon systems have received the most attention when it comes to the use of AI for military purposes. They are not a futuristic concept; they already exist in varying degrees of autonomy. In the Russia-Ukraine war, Ukrainian forces are deploying AI-enabled drones that can identify targets and navigate terrain, while Russia has fielded its own AI-based systems, including the Abzats anti-drone system that detects and disrupts Ukrainian drone frequencies. In the Israel-Hamas conflict, autonomous and AI-enabled systems play a similarly prominent role: the Iron Dome missile-defence system detects, identifies, and intercepts threats without human intervention, and AI tools such as Habsora, which uses machine learning to flag suspected Hamas operations based on pattern-of-life analysis, support targeting decisions.
APPLICATION OF IHL TO AUTONOMOUS WEAPONS
1. Principle of Distinction
One of the central challenges in applying IHL to AWS lies in ensuring accurate target discrimination. AI systems rely on data and algorithms, which may be biased, incomplete, or manipulated. Distinguishing combatants from civilians in complex environments, for example in urban warfare or among civilian populations, requires contextual and moral judgement that machines currently lack. A malfunction or data error could result in unlawful attacks, violating the principle of distinction. UNIDIR has also warned: “If an AI system has not encountered a certain scenario in training data, it may respond unpredictably in the real world … biased algorithms might misidentify civilians as combatants.”[5]
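The unpredictability UNIDIR describes can be illustrated with a small, hypothetical sketch: a statistical classifier trained only on a narrow range of examples will still produce a confident-looking label for an input unlike anything in its training data. The data, model choice, and numbers below are invented for illustration and do not correspond to any fielded system.

```python
# Toy illustration of the point quoted above: a model trained on narrow data
# still returns a confident-looking answer for inputs unlike anything it has
# seen, even though that answer is not grounded in relevant experience.
# All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data drawn from two tight clusters ("class 0" and "class 1").
X_train = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                     rng.normal(1.0, 0.1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X_train, y_train)

# An input far outside both training clusters: the model has never seen
# anything like it, yet it still outputs a label and an extreme "probability".
out_of_distribution = np.array([[25.0, -40.0]])
print(model.predict(out_of_distribution))
print(model.predict_proba(out_of_distribution))
```

In a targeting context, an equally confident but ungrounded output is precisely the kind of error that raises concerns under the principle of distinction.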
2. Principle of Proportionality
The proportionality rule requires commanders to weigh expected military gains against potential civilian harm. Translating such nuanced human judgement into algorithmic decision-making is problematic. AI systems may not be capable of evaluating proportionality in moral or contextual terms, and their decision-making processes are often opaque (the so-called black box problem). Consequently, ensuring compliance with proportionality remains a major legal uncertainty.
3. Principle of Accountability
Accountability is a cornerstone of IHL and international criminal law. Yet, autonomous weapons obscure the chain of responsibility. If an autonomous system commits a war crime, say by mistakenly attacking civilians, who is responsible? Possible candidates include the commander who deployed the system, the programmer who coded it, or the manufacturer who designed it. The absence of clear attribution threatens the principle of individual accountability enshrined in the Rome Statute of the International Criminal Court (1998)[6].
4. Requirement of Article 36 Review
Article 36 obligates states to ensure that new weapons comply with IHL. However, AI-driven systems evolve through machine learning, meaning their behaviour may change after deployment. This continuous adaptability complicates traditional weapons reviews, which are typically static. States must therefore develop dynamic review mechanisms that consider how AI systems might learn or behave in unforeseen ways.
ETHICAL AND HUMANITARIAN CONCERNS
Beyond legal obligations, the use of autonomous weapons raises profound ethical questions. Many scholars argue that delegating lethal decisions to machines undermines human dignity and moral agency. The concept of “meaningful human control” has emerged as a guiding standard, suggesting that humans must always retain sufficient control over critical decisions, particularly those involving lethal force.
INTERNATIONAL STANCE
The international community remains divided on how to regulate AI and autonomy in warfare. Discussions within the United Nations Convention on Certain Conventional Weapons (CCW) have highlighted two main camps:
- Prohibition advocates (e.g., Austria, Brazil, and several NGOs) call for a pre-emptive ban on “lethal autonomous weapons systems” (LAWS), arguing that they are inherently incompatible with IHL.
- Regulation proponents (e.g., the United States, Russia, and China) oppose a ban, favouring national-level regulations and technological safeguards instead.
CONTEMPORARY EXAMPLES
AI applications are already influencing conflicts:
- Ukraine-Russia War: Both sides have employed AI-enabled drones for surveillance, targeting, and cyber operations.
- Israel’s “Gospel” AI System: Used to generate real-time targeting data, raising debates about algorithmic responsibility.
- US Project Maven: Integrates AI to analyze battlefield imagery, exemplifying how human-machine collaboration may evolve in modern militaries.
These examples illustrate that AI is not merely theoretical but actively shaping warfare, often ahead of clear legal frameworks.
CONCLUSION
AI-driven autonomous weapons represent a defining challenge for contemporary International Humanitarian Law. While existing IHL principles of distinction, proportionality, necessity, and humanity remain applicable, their implementation becomes complex when decision-making shifts from humans to machines. The legal and ethical gaps surrounding accountability, human control, and unpredictability underscore the urgency for international consensus. Ultimately, technology must remain a tool guided by human conscience, not a substitute for it. Upholding the humanitarian spirit of IHL in the age of AI will determine whether future warfare respects the values of humanity that the law of armed conflict was built to protect.
[1] Geneva Conventions (I–IV), Aug. 12, 1949, 75 U.N.T.S. 31.
[2] Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), June 8, 1977, 1125 U.N.T.S. 3.
[3] Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I) art. 36, June 8, 1977, 1125 U.N.T.S. 3.
[4] Int’l Comm. of the Red Cross, What You Need to Know About Artificial Intelligence in Armed Conflict (Oct. 6, 2023), https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict
[5] UNIDIR, AI in the Military Domain: Implications for International Peace & Security (2025), https://unidir.org/wp-content/uploads/2025/07/UNIDIR_AI_military_domain_implications_international_peace_security.pdf
[6] Rome Statute of the International Criminal Court, July 17, 1998, 2187 U.N.T.S. 90.



