Published On: October 4th 2025
Authored By: Rongala Jahnavi
Gitam Deemed To Be University
Abstract
International humanitarian law (IHL) faces unprecedented challenges when artificial intelligence (AI) is used in armed conflict. This essay examines the intricate relationship between AI and IHL, focusing on precision targeting, cyber warfare, surveillance, and the deployment of autonomous weapons systems (AWS). AI has been expected to improve accuracy and lessen collateral harm, yet it brings serious difficulties: the loss of human control over decision-making, the possibility of algorithmic bias, and problems of attribution. These problems endanger the fundamental IHL tenets of distinction, proportionality, and humanity. This article finds that AWS, described as a “third revolution in military affairs,” run the risk of misidentifying targets due to biases ingrained in their programming, thereby breaching the principle of distinction. In a similar vein, the study finds that the application of AI to cyber operations raises questions about the proportionality of attacks and the difficulty of assigning responsibility. The article examines these issues through case studies and a comparative analysis of AI’s application in military operations, using a qualitative research methodology that draws on primary and secondary sources. Through the examination of specific incidents, the study emphasizes the urgent need to reevaluate current legal frameworks in order to address the unique difficulties posed by developing technology. Ultimately, this paper aims to contribute to the continuing discussion on creating the moral and legal frameworks required to govern AI’s application in combat, ensuring adherence to IHL while upholding national security and state sovereignty.
Introduction
In October 2024, as the Swedish Academy announced the Nobel Prize winners, internet users joked that ChatGPT, a generative chatbot, should win the literary prize “for its intricate tapestry of prose which showcases the redundancy of sentience in art.” This followed the news that contributions pertaining to artificial intelligence (AI) had won the Nobel awards in chemistry and physics. Artificial intelligence is one of the newest and fastest-growing scientific and technological domains, yet the idea itself is not new: it came into being with the advancement of computers in the 1950s and 1960s. The definition and use of AI have, however, changed dramatically since then. Alan Turing, a pioneer in the field of computing, devised an imitation game in 1950, known as the “Turing Test,” that asked, “Can a machine think?” The purpose of this test was to see whether an AI was intelligent enough to pass for a human. Turing predicted that by the year 2000 computers could be programmed well enough to fool at least 30% of human interrogators.
AI is the automation of activities that we associate with human thinking: decision-making, problem-solving, and learning. Its capabilities have since expanded far beyond these initial projections. Perhaps the first chatbot claimed to pass the Turing Test was created in 2001 by programmers in Saint Petersburg. These days, AI systems are capable of a wide range of tasks, such as identifying faces and objects, navigating roads and traffic to drive autonomously, and generating text, images, artificial voices, and music. Capacities once considered distinctly human, such as long-term planning, creativity, and the ability to simulate complicated concepts, are also improving. AI has developed from an experimental technology into a potent instrument in many fields, going well beyond the Turing Test.
Significant changes in the character of combat have occurred in tandem with advancements in technology. Much has changed since the time of the Greek war gods Ares and Enyo and their Roman counterparts Mars and Bellona. Etymologically, the English word for war traces not to the Latin bellum but to the Germanic werran, which evolved into werre and finally warre, the archaic spelling of war. Cave art from 10,000 years ago shows that humans have been fighting in groups since the Stone Age. Nearly 5,000 years ago, the city of Uruk in Mesopotamia is said to have initiated offensive military campaigns and created a system of military defense. Conflicts in the Indian subcontinent were depicted in the roughly 2,000-year-old epic the Mahabharata.
Nearly two thousand years later, the First World War marked a clear and horrifying departure from earlier wars. Along with the deployment of mobile infantry weapons such as light mortars, Lewis guns, and rifle grenades, a more effective system of combined-arms tactics integrating artillery, tanks, air power, and infantry was devised. Since then, military tactics and the tools and techniques of conflict have developed more quickly than ever before. Warfare throughout history has ranged from the ordered formations of the classical West to the close-combat tactics of ancient civilizations, sophisticated methods drawn upon by both Islamic warfare and European chivalry. The advent of mechanized and total conflict in the industrial age is thought to have culminated in the destructive world wars of the 20th century.
The role of politics and information technology in combat has grown dramatically in the 21st century. As a result, drone warfare, cyber warfare, and remote killing have become commonplace, and armed drones and other unmanned or human-replacing weapon systems are increasingly prevalent in modern warfare. Legal, strategic, and ethical issues arise when AI is used in combat, especially with respect to International Humanitarian Law (IHL). IHL’s fundamental tenets of distinction, proportionality, and humanity are particularly threatened by AI’s inherent drawbacks, including algorithmic bias, a lack of human control, and accountability concerns. Legal frameworks therefore need to be reevaluated to address the ethical, strategic, and humanitarian concerns raised by AI and autonomous weapon systems.
Evolution of International Humanitarian Law
Cicero is frequently credited with the well-known Latin maxim inter arma silent leges, which roughly translates to “in time of war the laws are silent.” Yet since men first started fighting, there have been rules of war in some form, however soft, flexible, and malleable. The idea that war should be regulated is not new: in the past, belligerents governed the conduct of war through private agreements and pacts. The nineteenth-century founding of the ICRC and the initiative of Henry Dunant paved the way for the formation of contemporary legal frameworks, including IHL. The four 1949 Geneva Conventions and the 1977 Additional Protocols serve as the primary codifications of this branch of public international law today. Furthermore, it has been argued that a body of customary law regulating armed conflict has developed over the last few decades; customary law refers to legal precepts that have evolved through consistent state practice and are widely regarded as obligatory.
IHL as a whole is a body of law designed to lessen the suffering that armed conflicts inflict on people. Its main goals are to limit the means of conflict and to protect individuals who do not, or no longer, participate in hostilities. It must strike a balance between this goal and another: safeguarding the military’s ability to carry out the armed conflict. Just as mechanization changed combat in the 20th century, the emergence of AI and AWS is predicted to do the same. Although it is generally acknowledged that AI may be used for surveillance and logistics, armed AI presents moral and legal concerns. Targeting decisions are still made by humans even when militaries deploy automated weaponry, but as some countries move toward complete autonomy, AI systems will be able to make crucial military decisions without human input. At the far end of this legal spectrum stands the international law of war.
Challenges of Autonomous Weapon Systems (AWS)
AWS present serious operational, ethical, and legal difficulties, especially with regard to their ability to adhere to IHL norms. An AWS is a robotic weapon system that, once activated, requires no further human assistance: it can select and strike targets independently. Sensors, computers, and effectors equip such systems with situational awareness, information processing, and decision-making capabilities. AWS “are not one or two types of weapons,” according to the Group of Governmental Experts on Emerging Technologies in the Area of Lethal AWS. Rather, they form a category of capability: weapon systems that integrate autonomy into their essential operations, particularly target selection and engagement.
Two episodes illustrate the stakes. First, the end of the world was arguably averted during the Cold War when a Soviet officer manually overrode a false nuclear launch alarm generated by an automated early-warning system; a machine might not have made that choice. Second, Paul Scharre’s sniper team encountered a little girl in Afghanistan who was in fact a Taliban scout. He concluded that even if killing her would have been lawful, a robot would not understand that it might be unethical and wrong. These situations expose the shortcomings of AWS as well as the indispensable human ability to understand context and morality.
Notwithstanding these difficulties, modern robots are revolutionizing warfare just as they are revolutionizing other sectors, from robotic cleaners to self-driving vehicles. Numerous nations are directing defense budget investments toward military robots. According to Global Market Insights (2023), global spending on military robotics was $13.4 billion in 2022 and is projected to reach $30 billion by 2032. The U.S. Air Force’s Unmanned Aircraft Systems Flight Plan (2009-2047) predicts that future competition over the speed and automation of unmanned aircraft will resemble automated stock trading.
The expansion of AWS poses problems for the core principles of IHL. The principles of distinction, proportionality, and precaution have been contested more than anything else, and this issue must be given priority. The rule of distinction holds that only lawful targets may be attacked; it distinguishes between civilian objects and military objectives. Yet in the case Scharre describes above, the young girl scouting for the Taliban was, in theory, a valid target. It is in the real world that AWS are put to the test, where mechanical distinction is superseded by humane standards of judgment. In a similar vein, the proportionality principle stipulates that any collateral or incidental harm done to non-combatant civilians must not be excessive in relation to the military advantage gained. Thirdly, the principle of precaution requires the implementation of practical measures to safeguard civilians. With the development of AWS, applying these IHL principles, which strike a balance between humanitarian needs and the exigencies of war, has become more important than ever.
One of the earliest military conflicts to employ so-called killer robots was the war in Ukraine. Analysis of such real-world scenarios suggests that limited autonomous targeting may be workable in isolated, predictable environments, but that human supervision remains essential. To make the use of autonomous attack systems safer, the parties must guarantee dependable monitoring and override capabilities using cutting-edge technologies.
AI in Cyber Warfare
Cyber warfare refers to military operations that primarily use computer networks and systems to target those of the adversary. Even before AWS were fielded, AI had long been used as a cyber weapon in practice. The application of AI in cyber warfare is related to, but distinct from, the employment of AWS in contemporary warfare; the two differ in their operating domains. AWS are physical systems that can carry out real-world tasks on their own, as discussed in the preceding section, whereas cyber warfare affects online environments by altering or disrupting networks. They are related in that both technologies rely heavily on intricate, opaque software, which makes it harder to attribute attacks and to assign accountability under IHL. Like AWS, cyber warfare creates new difficulties for IHL’s standards of distinction and proportionality.
Artificial intelligence is not always involved in cyber warfare. Cyber attacks occurred long before the emergence of contemporary AI, and cyber warfare can be carried out manually, much like conventional battlefield warfare. Increasingly, however, cyber tactics depend on AI and on predetermined courses of action, such as situational-awareness systems that evaluate contested cyberspace in real time and execute at computing speed. One of the first and most prominent cyber attacks occurred in Estonia in April 2007, when a protracted series of distributed denial of service (DDoS) attacks brought the banking system, numerous government institutions, and a large portion of the media to a standstill. Owing to the advancement of cyber technology and its application in combat, the U.S. Department of Defense now views cyberspace as a new theater of conflict, open to both offensive and defensive military actions.
By providing automated attack capabilities and responsive defenses, AI is transforming cyber warfare. Automated cyber defenses manage threats through ongoing, adaptive processes, while AI systems can evaluate defenders’ actions in real time and produce dynamic responses. Large databases of worldwide cyber activity fuel this adaptability, making AI-driven cyber attacks more nimble and harder to stop. Social media combined with AI has also developed into a potent instrument for subtly influencing civilians for military gain. Given the abundance of accessible data, AI confers a significant edge in network security and penetration, and gaining an advantage in cyber operations increasingly requires it.
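To make the idea of an “ongoing, adaptive” automated defense concrete, the following is a minimal illustrative sketch in Python of a detector that learns a baseline of normal traffic and flags DDoS-like surges of the kind seen in Estonia. Everything here, the class name, the smoothing and threshold parameters, and the synthetic traffic, is a hypothetical toy for illustration, not any deployed system.

```python
# Minimal illustrative sketch of an "adaptive" cyber defense: a detector that
# learns a baseline of normal request traffic and flags sudden surges such as
# those produced by DDoS attacks. All numbers and the synthetic traffic are
# invented for illustration; real defenses are far more sophisticated.
import random

class AdaptiveRateDetector:
    def __init__(self, alpha: float = 0.1, threshold: float = 3.0):
        self.alpha = alpha          # smoothing factor for the moving baseline
        self.threshold = threshold  # multiple of baseline that counts as a surge
        self.baseline = None        # exponentially weighted moving average

    def observe(self, requests_per_second: float) -> bool:
        """Update the learned baseline and report whether traffic is anomalous."""
        if self.baseline is None:
            self.baseline = requests_per_second
            return False
        anomalous = requests_per_second > self.threshold * self.baseline
        if not anomalous:
            # Adapt the baseline only on normal traffic, so an attacker
            # cannot quickly "teach" the detector that a flood is normal.
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * requests_per_second
        return anomalous

detector = AdaptiveRateDetector()
normal = [random.gauss(100, 10) for _ in range(50)]    # ordinary load
attack = [random.gauss(2000, 200) for _ in range(10)]  # simulated flood
for t, rate in enumerate(normal + attack):
    if detector.observe(rate):
        print(f"t={t}: possible DDoS surge at {rate:.0f} req/s")
```

The design choice worth noting is the feedback loop: the defense continuously re-learns what “normal” looks like, which is the adaptive quality described above, but the same adaptivity is what an AI-driven attacker probes and evades in real time.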
The Role of AI in Surveillance and Precision Targeting
“The ability of a machine to observe, evaluate, and act more rapidly and correctly than a person constitutes a competitive advantage in any field, civilian or military,” according to the 2021 report of the U.S. National Security Commission on AI. AI technologies will confer a great deal of power on the businesses and nations that use them. Modern warfare has changed as a result of AI, especially in the areas of precision targeting and surveillance. AI-enhanced weapon systems use algorithms to detect, track, and engage targets quickly and accurately. On battlefields, these systems, which include facial and image recognition, are essential for gathering intelligence and conducting real-time surveillance. Concerns are nevertheless raised about adherence to IHL’s norms of proportionality and distinction. It has been argued that IHL cannot be complied with by Offensive Lethal Autonomous Robots (OLARs), since they lack human discretion and judgment. The trend toward smaller, portable devices with advanced sensors and target recognition allows both militaries and non-state actors to exploit AI technology with little ethical restraint. By engaging military targets precisely, AI may be able to minimize collateral damage; this is pursued by combining machine-learning techniques with swarms of robotic equipment.
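The distinction concern can be made concrete with a small sketch of the decision logic that such recognition systems ultimately reduce to: class labels, confidence scores, and a fixed cutoff. This is a hypothetical Python toy, not any fielded system; the labels, threshold value, and “model” outputs are invented for illustration.

```python
# Minimal sketch of the decision logic at the heart of the distinction concern:
# a recognition model reduces a scene to labels with confidence scores, and a
# fixed threshold turns those scores into a binary call. All names, numbers,
# and the toy "model" outputs here are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # what the model thinks it sees, e.g. "combatant"
    confidence: float # the model's probability estimate, 0.0 to 1.0

RELIABILITY_THRESHOLD = 0.90  # hypothetical cutoff for trusting a detection

def flag_targets(detections: list[Detection]) -> list[str]:
    """Return detections the system would treat as lawful military targets.

    The fragility is visible here: a model trained on biased or incomplete
    data can assign high confidence to a wrong label (a child scout, as in
    Scharre's example), and nothing in this logic can catch that.
    """
    flagged = []
    for d in detections:
        if d.label == "combatant" and d.confidence >= RELIABILITY_THRESHOLD:
            flagged.append(f"{d.label} ({d.confidence:.0%})")
    return flagged

# A confidently wrong detection passes the threshold as easily as a correct one:
scene = [Detection("civilian", 0.55), Detection("combatant", 0.97)]
print(flag_targets(scene))  # ['combatant (97%)'] - only as sound as the training data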
Recent combat scenarios demonstrate the speed and effectiveness of AI-enabled systems, such as Russia’s near-immediate reconnaissance-to-attack cycle during the 2014 crisis in Ukraine. By improving targeting precision, AI can also assist human decision-making, thereby safeguarding civilians and lowering civilian casualties. The realization of these benefits, however, depends on the successful integration of AI systems that adhere to IHL norms.
China is modernizing command and control by using AI to improve the speed and precision of battlefield decision-making. The People’s Liberation Army, which uses AI for data fusion and predictive planning, offers an early example of AI-led warfare. AI-enabled defenses are reframing traditional tactical doctrines, prioritizing cutting-edge defensive strategies in an AI-rich environment. Although sophisticated missile systems using deep reinforcement learning algorithms make near-pixel-perfect accuracy possible, worries about diminished human control and reliability persist. Because AI depends on pre-existing data, it may make disastrous mistakes when conditions depart from that data.
In practice, the use of AI in military projects demonstrates how revolutionary it may prove in combat. For example, the U.S. Navy’s LOCUST project and China’s “intelligentized” cruise missiles suggest that advances in autonomous high-precision weaponry are unavoidable. The U.S. Marine Corps’ “warbot companies” and the U.S. Defense Advanced Research Projects Agency’s unmanned vessel program further demonstrate the use of AI in distributed sensing and continuous tracking. Meanwhile, the U.S. “Loyal Wingman” program and Russia’s quick-reaction model in Ukraine show the tactical benefits of AI-assisted rapid targeting. These implementations, however, make oversight of AI-driven decision-making all the more necessary. Taken together, AI promises improved military capabilities in surveillance and precision, but its ethical use will require rigorous oversight: its ability to strike targets precisely must be balanced against IHL’s requirement for critical human judgment.
National Security and AI
The incorporation of AI and AWS into national security frameworks has intensified discussions over cyber sovereignty, moral ramifications, and the preservation of state sovereignty in increasingly digital conflict. In 2018, Jeremy Wright, then Attorney General of the United Kingdom, questioned whether international law specifically forbids violations of territorial sovereignty by unauthorized cyber activities. The United Kingdom does not believe that a special rule of cyber sovereignty exists; in its view, cyber activities are prohibited by the U.N. Charter only if they amount to an unlawful intervention or a use of force against another state. By contrast, nations such as France, the Netherlands, and Austria, along with NATO members other than the United Kingdom, maintain that unauthorized cyber activities can infringe national sovereignty. Cyber sovereignty remains debated, with some contending that functional loss or physical damage constitutes a violation; in general, active and offensive cyber measures can violate another state’s sovereignty, while passive protection is usually permissible. Because AI algorithms can integrate across applications and enhance the “Internet of Things,” they are essential to military systems for national security. AI is also dual-use: it can be employed in both military and civilian settings.
AI is, moreover, an embedded and largely invisible technology: it is frequently integrated into other technologies and is not obvious in commonplace goods. Global powers such as the United States, China, and Russia deploy cyber security innovations and AI-driven weapons to demonstrate their dominance in geopolitical confrontations and to display national might. The United States defines AWS as automated systems and gives AI top priority in defending against everyday cyber attacks, emphasizing the transition to a constant, digital battlefield. China, by contrast, positions itself as an assertive global power by using AWS as a symbol of its growing AI superiority, having created sophisticated systems including AI-guided missile technology. Both countries convey military might and patriotism through AWS as “geopolitical signifiers” that represent their ideas of world order. Russia, meanwhile, has used AI in cyber operations to influence global political events through strategies including social media manipulation.
Interestingly, some contend that the development of AI in military systems has threatened state sovereignty by giving non-state actors access to hitherto unattainable forms of influence and power. At the same time, the incorporation of AI has enhanced states’ surveillance capabilities, enabling them to better protect their citizens and thereby strengthening national security. As these nations seek to establish supremacy through the development of strong AI systems, national security has become reliant on these rapidly evolving technologies.
Conclusion
Modern warfare is undergoing a radical change as a result of the increasing application of AI in combat. Although AI offers operational benefits such as accuracy and speed, it also presents IHL with extensive ethical and legal problems. Even though AWS and AI-powered weapons are not expressly forbidden, their use must adhere to the basic IHL requirements of proportionality and distinction. The current legal framework established by the Geneva Conventions forbids indiscriminate or disproportionate attacks, demands a distinction between combatants and civilians, and mandates that states evaluate each new weapon for compliance with IHL. Current review methods, however, are insufficient to examine AI’s autonomous decision-making. Laws specifically suited to the dangers posed by AI and AWS must be established. A regulatory response might take the form of a new treaty, amendments to the UN Convention on Certain Conventional Weapons (CCW), or a new protocol under the Geneva Conventions. The International Court of Justice might also render an advisory opinion on state accountability for AI warfare.
Accountability and attribution are two legal conundrums of AI-infused combat for which current systems are inadequate. New accountability mechanisms, such as AI war crimes tribunals or state responsibility frameworks, need to be established. Meaningful human control measures are necessary because autonomous targeting carries the risk of unintentional escalation, misidentification, and systemic violations of IHL. The unchecked application of AI-driven weapons technology threatens both international stability and state sovereignty, offering unprecedented advantages in asymmetric warfare, automated retaliation systems, and cyber warfare. To guarantee that AI systems used in combat adhere to the principles of IHL and state accountability, the international community must collaborate to create legally binding standards.
References
- ABP News Bureau. (2024, October 10). Nobel Prize In Literature: Here’s What Internet Thinks Who Should Be Winner In 2024 – Answer Will Leave You In Splits. ABP Live. https://news.abplive.com/trending/nobel-prize-literature-2024-internet-predictions-winner-opinions-1723397
- Akerson, D. (2013). The illegality of offensive lethal autonomy. In D. Saxon (Ed.), International Humanitarian Law and the Changing Technology of War (pp. 65-98). Martinus Nijhoff Publishers.
- Amoroso, D., Sauer, F., Sharkey, N., Suchman, L., & Tamburrini, G. (2018). Autonomy in weapon systems: The military application of artificial intelligence as a litmus test for Germany’s new foreign and security policy. Heinrich Böll Foundation.
- Johnson, J. (2020). Artificial intelligence, drone swarming and escalation risks in future warfare. The RUSI Journal, 165(2), 26-36. https://doi.org/10.1080/03071847.2020.1752026
- Kallberg, J., & Cook, T. S. (2017). The unfitness of traditional military thinking in cyber. IEEE Access, 5, 8126-8130. https://doi.org/10.1109/ACCESS.2017.2693260
- Kilovaty, I. (2018). Doxfare: Politically motivated leaks and the future of the norm on non-intervention in the era of weaponized information. Harvard National Security Journal, 9, 146-179. https://ssrn.com/abstract=2945128