AI in Armed Conflict: The Application of International Humanitarian Law to Autonomous Weapons Systems

Published on 26th July 2025

Authored By: Manasvi Joshi
Symbiosis Law School, Hyderabad

Abstract

The use of artificial intelligence in modern warfare has transformed how armed forces operate and raises pressing questions about how international humanitarian law (IHL), the body of law designed to protect people in armed conflict, applies to weapons that make targeting decisions on their own. This article explores the legal challenges posed by lethal autonomous weapons systems (LAWS) with respect to the IHL principles of distinction, proportionality, and precaution.

The article argues that existing IHL frameworks must be reinterpreted and supplemented to address the challenges of machine decision-making. States must cooperate urgently to set clear rules so that autonomous weapons comply with humanitarian law and humans remain involved in life-or-death decisions.

Introduction

The rapid progress of artificial intelligence has changed the conduct of warfare: machines can now influence, and in some cases make, decisions about the use of deadly force. Autonomous and AI-assisted systems, such as Israel’s “Gospel” and “Lavender,” raise serious questions about their place within international humanitarian law. IHL, codified principally in the Geneva Conventions and their Additional Protocols, was designed with human combatants in mind and is strained by the rise of autonomous systems. Its key principles of distinguishing between combatants and civilians, ensuring proportionality, and taking precautions all depend on human judgment. When algorithms make decisions, questions arise about legal compliance, accountability for mistakes, and the preservation of human dignity in war.

This article examines how international humanitarian law (IHL) applies to autonomous weapons. It highlights three central problems: AI systems struggle to interpret the full context of a situation, there is no clear way to assign responsibility when mistakes happen, and human control over the use of force is diminished. While IHL principles remain applicable, they must be reinterpreted, and supplemented by new legal tools, to better protect both civilians and combatants.

The Legal Framework: IHL Principles and Autonomous Weapons

I. THE PRINCIPLE OF DISTINCTION

The principle of distinction is a core tenet of international humanitarian law, requiring parties to an armed conflict to differentiate between combatants and civilians. As outlined in Article 48 of Additional Protocol I to the Geneva Conventions, military operations must be directed only at legitimate military objectives. This requirement poses significant difficulties for autonomous weapons, whose targeting decisions must replicate judgments the law assumes will be made by humans.

These systems must accurately identify military targets while protecting civilians, but current AI technology has limitations in visual recognition and in understanding the complexities of modern conflicts. Determining whether a person is a lawful target depends on context-sensitive factors such as direct participation in hostilities and the protection from attack that civilians otherwise enjoy.

The International Committee of the Red Cross emphasizes that context-specific legal judgments are necessary to determine whether a person or object may lawfully be attacked, taking into account the environment and civilian presence. Despite advances in AI, accurately interpreting this complex information remains challenging, particularly in urban warfare, increasing the risk of misidentification.

II. PROPORTIONALITY IN AUTONOMOUS TARGETING

The principle of proportionality, outlined in Article 51(5)(b) of Additional Protocol I, prohibits attacks that cause excessive civilian harm relative to the anticipated military advantage. This principle relies on human judgment, raising questions about whether autonomous weapons can make such value-based assessments.

Proportionality assessments require commanders to balance military necessity with humanitarian concerns, considering both immediate tactical gains and their broader implications for civilians. The “reasonable commander” standard presupposes moral reasoning, whereas autonomous systems operate on fixed algorithms and lack the contextual and moral judgment that such analysis requires.

Recent developments in AI-assisted targeting, especially in Gaza, illustrate both the potential and the limitations of algorithmic proportionality assessments. Systems like “Lavender” assign numerical scores to targets, which may oversimplify complex ethical decisions and overlook humanitarian considerations. AI systems can reportedly generate in about a week the volume of target assessments that previously required some 250 days of human analysis, potentially compromising traditional safeguards for the sake of speed.
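To illustrate why a scalar score is a poor proxy for proportionality, the following sketch shows a purely hypothetical threshold-based clearance function. It is not a description of Lavender or any real system; every field, weight, and cutoff is an invented assumption, used only to show how fixed numeric filters stand in for the case-by-case balancing the law expects of a human commander.

```python
# Hypothetical sketch of a threshold-based target-scoring pipeline.
# It does NOT describe any real system; every field, weight, and cutoff
# below is an illustrative assumption. The point is structural: once
# proportionality is reduced to a number crossing a cutoff, the
# context-dependent balancing IHL requires disappears from the loop.

from dataclasses import dataclass

@dataclass
class CandidateTarget:
    suspicion_score: float         # e.g. output of a behavioural classifier, 0 to 1
    expected_civilian_harm: float  # modelled incidental harm, arbitrary units
    military_advantage: float      # modelled advantage, arbitrary units

def algorithmic_clearance(t: CandidateTarget,
                          suspicion_cutoff: float = 0.7,
                          harm_ratio_cutoff: float = 1.0) -> bool:
    """Return True if the target 'clears' the fixed numeric filters.

    A reasonable commander weighs context, doubt, and alternatives;
    this function only compares numbers against pre-set cutoffs.
    """
    if t.suspicion_score < suspicion_cutoff:
        return False
    # Fixed ratio test standing in for the proportionality judgment.
    return t.expected_civilian_harm <= harm_ratio_cutoff * t.military_advantage

# A borderline case is waved through or rejected purely by the cutoffs,
# with no record of the qualitative factors a human reviewer would weigh.
print(algorithmic_clearance(CandidateTarget(0.71, 0.9, 1.0)))  # True
print(algorithmic_clearance(CandidateTarget(0.69, 0.0, 5.0)))  # False
```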

III. PRECAUTIONARY MEASURES AND HUMAN AGENCY

The principle of precaution in attacks, outlined in Article 57 of Additional Protocol I, requires those who plan or decide upon attacks to take all feasible steps to minimize civilian harm. This involves verifying targets, assessing whether civilians are present, choosing the timing of attacks, and selecting means and methods that avoid or minimize incidental harm. While autonomous weapon systems could enhance these precautions by processing data rapidly and executing precise operations, they also present challenges that continue to require human judgment and ethical decision-making.

Using autonomous systems responsibly requires meaningful human control. Operators must be able to understand what the system is doing, decide whether to act, and intervene when necessary. Humans remain responsible for ensuring that International Humanitarian Law (IHL) is respected, even when technology assists military operations.

Accountability Gaps and Responsibility Challenges

I. THE PROBLEM OF DISTRIBUTED RESPONSIBILITY

The deployment of autonomous weapon systems complicates responsibility and challenges traditional accountability in armed conflict. Unlike conventional weapons, whose use sits within clear chains of command, autonomous systems involve multiple actors in their development and operation, creating “accountability gaps” in which violations occur without clear responsibility.

The Rome Statute of the International Criminal Court assumes human agency in war crimes. However, when autonomous systems cause unlawful harm, it raises questions about whether responsibility lies with commanders, programmers, or operators. Article 28, which addresses command responsibility, becomes complicated with autonomous systems that can act unpredictably, often beyond human control.

A major challenge is the temporal and causal gap between human decisions and the actions autonomous systems later carry out. Commanders may not know how an autonomous weapon will make subsequent targeting decisions, complicating the link between authorization and machine action. Machine learning systems can behave unpredictably and may act in ways their creators never specifically programmed.

II. CORPORATE AND STATE RESPONSIBILITY

Autonomous weapons raise critical questions about corporate and state responsibility for violations of International Humanitarian Law (IHL). Their development relies on a web of supply chains that includes defence contractors and technology companies, so scrutiny of design choices becomes essential when these systems cause unlawful harm.

Corporate liability is complicated by the dual-use nature of many AI tools. Companies that develop algorithms for civilian applications may find them unexpectedly used for military purposes, creating unresolved accountability issues in international law. Some scholars argue for strict liability standards to hold manufacturers accountable for foreseeable misuse, while others caution that this could hinder beneficial technological advances.

State responsibility is central to filling these accountability gaps. States must ensure that their armed forces comply with IHL, including when using autonomous systems. Article 36 of Additional Protocol I requires a legal review of new weapons before they are used, but the capacity of machine learning systems to change after deployment makes such assessments increasingly difficult.

III. VICTIMS’ RIGHTS AND REMEDIES

The accountability gaps created by autonomous weapons systems have serious implications for victims’ rights. International human rights law guarantees victims the right to justice, reparations, and truth about their harm. However, enforcing these rights is complicated by unclear responsibility and opaque algorithmic decision-making.

The complexity of algorithms makes it difficult for victims to understand why they were targeted and for legal systems to determine whether violations occurred. Effective remedies require identifiable responsible parties and accessible mechanisms for redress; with autonomous systems, such parties may be absent or diffuse. New legal rules are therefore needed that respond to the particular problems AI creates in war.

Contemporary Applications and Case Studies

I. AI WARFARE IN GAZA: THE GOSPEL AND LAVENDER SYSTEMS

The conflict in Gaza, which began on October 7, 2023, has featured the large-scale use of AI-assisted targeting by the Israel Defense Forces (IDF) through systems known as “The Gospel,” “Lavender,” and “Where’s Daddy?” These systems accelerate target identification, but they also raise serious humanitarian and ethical concerns.

The “Gospel” system can reportedly compile target lists for buildings suspected of housing militants in as little as a week, prompting concerns about the effectiveness of human review and compliance with International Humanitarian Law (IHL). The “Lavender” system identifies suspected Hamas members based on behavioural scoring, raising the risk of false positives that could endanger civilians. The “Where’s Daddy?” system tracks militants to their homes, blurring the line between combatants and civilians and complicating proportionality assessments.
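The false-positive risk described above is partly a matter of simple arithmetic. The short calculation below, using entirely invented figures, shows how behavioural scoring across a large population can flag thousands of civilians even when the classifier appears accurate; none of the numbers reflect any real system.

```python
# Illustrative base-rate arithmetic with invented numbers, showing why
# behavioural scoring over a large population can flag many civilians even
# when the classifier looks accurate. No figure here describes any real
# system; they only demonstrate the statistical effect.

population = 1_000_000        # people whose behaviour is scored (assumption)
true_member_rate = 0.005      # 0.5% actually belong to the targeted group (assumption)
sensitivity = 0.90            # fraction of real members correctly flagged (assumption)
false_positive_rate = 0.02    # fraction of civilians wrongly flagged (assumption)

members = population * true_member_rate
civilians = population - members

true_positives = members * sensitivity
false_positives = civilians * false_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"Civilians wrongly flagged: {false_positives:,.0f}")
print(f"Share of flagged people who are actually members: {precision:.1%}")
# With these assumptions, roughly 19,900 civilians are flagged and fewer
# than one in five flagged individuals is a genuine member.
```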

II. AUTONOMOUS SYSTEMS IN THE UKRAINE CONFLICT

The war in Ukraine has demonstrated how loitering munitions such as the Shahed-136 can operate with little direct human oversight, raising concerns about how much control humans actually retain over these devices. Deployed in swarms, such munitions can damage civilian infrastructure and challenge traditional command responsibility, complicating military leaders’ obligations under international law.

III. IMPLICATIONS FOR IHL COMPLIANCE

The use of AI in Gaza and Ukraine exposes significant gaps in International Humanitarian Law as applied to autonomous weapons. The rapid deployment of these systems can bypass customary safeguards, and civilian harm appears at times to be accepted as a trade-off for operational speed. As AI advances, discrepancies between design specifications and operational realities complicate legal reviews and accountability for civilian casualties.

The Path Forward: Legal and Policy Recommendations

I. STRENGTHENING INTERNATIONAL LEGAL FRAMEWORKS

The challenges posed by autonomous weapons systems to international humanitarian law (IHL) require both immediate guidance and long-term treaty development. Existing law needs clearer interpretation in relation to the particular challenges of AI-enabled warfare. The international community must clarify how IHL principles apply to these systems while preserving their protective purpose.

A protocol to the Convention on Certain Conventional Weapons (CCW) targeting autonomous weapons could establish binding international norms. This protocol should prohibit fully autonomous weapons operating without meaningful human control and create standards for human oversight and mission abort capabilities.

Enhancing legal reviews under Article 36 of Additional Protocol I is necessary to address the specific challenges of autonomous systems. States should develop guidelines for assessing AI-enabled weapons and employ interdisciplinary teams for a comprehensive evaluation of IHL compliance.

Standards for meaningful human control should specify how humans are involved in decisions to use deadly force. Autonomous systems must be subject to continuous monitoring, with clear rules governing when they may be deployed and how they are managed, and with appropriate authorization and supervision to ensure they operate safely.
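As one way of making this concrete, the following schematic sketch shows how a human-authorization gate with an abort option might be expressed in a system’s control flow. All class and function names are hypothetical; the sketch is not drawn from any real weapon system or standard, and it assumes that silence or timeout counts as refusal.

```python
# Schematic sketch of a human-authorization gate with an abort path,
# illustrating one way "meaningful human control" might appear in a
# system's control flow. All names are hypothetical; this is not drawn
# from any real weapon system or standard.

from enum import Enum, auto

class Decision(Enum):
    APPROVE = auto()
    REJECT = auto()
    ABORT = auto()   # available at any point before and during execution

class EngagementRequest:
    def __init__(self, target_summary: str, assessment: str):
        self.target_summary = target_summary  # what the system proposes to strike
        self.assessment = assessment          # machine-generated rationale shown to the operator

def human_gate(request: EngagementRequest, operator_decision: Decision) -> bool:
    """Only a positive, informed human decision allows the engagement to proceed.

    The operator sees the system's own rationale; silence or timeout is
    treated as rejection rather than consent.
    """
    return operator_decision is Decision.APPROVE

# Usage: the autonomous component may nominate, but never self-authorize.
request = EngagementRequest("structure at grid 123-456", "pattern-of-life match, confidence 0.74")
if human_gate(request, Decision.REJECT):
    print("Engagement authorized by operator")
else:
    print("Engagement not authorized; system stands down")
```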

II. ADDRESSING ACCOUNTABILITY MECHANISMS

New accountability rules for autonomous weapons are needed to clarify who is responsible and to ensure that victims obtain redress. Strict liability standards may be necessary where causation is difficult to prove.

International criminal law may require adaptation to create new categories of war crimes related to autonomous weapons and to clarify command responsibility for their use. Compensation mechanisms, such as mandatory insurance for developers and international victim funds, should be established to ensure that victims receive redress.

An international body to oversee autonomous weapons systems should be established to monitor compliance, investigate incidents, and provide technical assistance, given the complexity these systems introduce into the application of international humanitarian law.

Conclusion

The introduction of artificial intelligence into warfare poses profound questions for international humanitarian law, a body of law designed for traditional combat rather than for machines making decisions on the battlefield. The principles of distinction, proportionality, and precaution remain essential, but they must be carefully reinterpreted for autonomous weapons. Gaps in accountability and diminished human control over lethal decisions threaten the protective purpose of humanitarian law.

Recent conflicts in Gaza and Ukraine show how AI is reshaping warfare and affecting the civilians caught in it. The speed at which autonomous systems operate can undermine traditional safeguards for civilians, and there is evidence of a growing tolerance for civilian casualties in AI-assisted operations.

Addressing these issues demands urgent international cooperation to establish binding legal norms that ensure compliance with IHL and uphold meaningful human control over life-and-death decisions. This includes both immediate interpretive guidance and longer-term treaty development to handle the complexities of AI-driven warfare. Clear standards for human oversight and new accountability mechanisms are essential to preserve humanitarian protections. Swift action is needed to prevent the erosion of safeguards built up over generations.

References

[1] “Lethal Autonomous Weapon Systems (LAWS) – UNODA” <https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/>

[2] “Meaningful Human Control – Autonomous Weapon Systems between Regulation and Reflexion” (March 7, 2025) <https://meaningfulhumancontrol.de/en/home/>

[3] Adamson L and Zamani M, “How Meaningful Is ‘Meaningful Human Control’ in LAWS Regulation?” Lieber Institute West Point (March 26, 2025) <https://lieber.westpoint.edu/how-meaningful-is-meaningful-human-control-laws-regulation/>

[4] Boulanin V, Goussac N and Bruun L, “Autonomous Weapon Systems and International Humanitarian Law: Identifying Limits and the Required Type and Degree of Human–Machine Interaction” (SIPRI) <https://www.sipri.org/publications/2021/policy-reports/autonomous-weapon-systems-and-international-humanitarian-law-identifying-limits-and-required-type>

[5] Crootof R, “Autonomous Weapon Systems and Proportionality” [2015] Völkerrechtsblog <https://voelkerrechtsblog.org/autonomous-weapon-systems-and-proportionality/>

[6] Davison N and International Committee of the Red Cross, “Autonomous Weapon Systems under International Humanitarian Law,” vol No. 30 (2018) <https://www.icrc.org/sites/default/files/document/file_list/autonomous_weapon_systems_under_international_humanitarian_law.pdf>

[7] Docherty B, “A Hazard to Human Rights” (2025) <https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making>

[8] Goussac N and Pacholska M, “The Interpretation and Application of International Humanitarian Law in Relation to Lethal Autonomous Weapon Systems” (UNIDIR, March 6, 2025) <https://unidir.org/publication/the-interpretation-and-application-of-international-humanitarian-law-in-relation-to-lethal-autonomous-weapon-systems/>

[9] Law RO, “The Legal Framework Governing Artificial Intelligence in Warfare: Challenges and Opportunities” (Record of Law, April 18, 2025) <https://recordoflaw.in/the-legal-framework-governing-artificial-intelligence-in-warfare-challenges-and-opportunities/>

[10] Musco Eklund A, “Meaningful Human Control of Autonomous Weapon Systems” (FOI 2020) report FOI-R–4928–SE <https://www.fcas-forum.eu/publications/Meaningful-Human-Control-of-Autonomous-Weapon-Systems-Eklund.pdf>

[11] Sawczyszyn J, “Lethal Autonomous Weapon Systems (LAWS): Accountability, Collateral Damage, and the Inadequacies of International Law” (Temple iLIT, November 12, 2024) <https://law.temple.edu/ilit/lethal-autonomous-weapon-systems-laws-accountability-collateral-damage-and-the-inadequacies-of-international-law/>

[12] Serhan Y, “How Israel Uses AI in Gaza—And What It Might Mean for the Future of Warfare” TIME (December 18, 2024) <https://time.com/7202584/gaza-ukraine-ai-warfare/>
