AI And Legal Liability: Challenges Of Accountability In Autonomous Decision-Making

Published On: 2nd December 2025

Authored By: Mrunmayee Kulkarni

Abstract

As artificial intelligence takes over more and more jobs—from driving cars to helping doctors and lawyers—we’re facing a big question: Who is responsible when something goes wrong? Our current laws, built on the idea of humans making decisions, just aren’t keeping up.

This paper explores the tricky parts of holding AI accountable. It’s hard to figure out who to blame because AI can make decisions on its own, its inner workings are often a mystery (“black box”), and the responsibility is spread out among many people—the programmer, the company, and the user.

Ultimately, we argue that India needs to create specific laws for AI. These laws should require AI to be more transparent, make sure there’s mandatory insurance for when things go wrong, and, most importantly, protect people’s fundamental rights. Ensuring accountability isn’t just a legal puzzle; it’s a moral and ethical duty to make sure our justice system works for everyone in this new age of automation.

Keywords: Artificial Intelligence (AI), Legal Liability, Autonomous Decision-Making, Product Liability, Tort Law, Legal Personhood.

1. Introduction

1.1 Background and Significance

AI is everywhere now. It’s not just in sci-fi movies anymore; it’s driving our cars, helping doctors diagnose illnesses, and even assisting lawyers. This is exciting, and it’s making our lives more efficient. But it also raises a huge, thorny question: When an AI system messes up, who is to blame?

Our laws were written for a human world. They are based on the idea that a person made a choice, and that choice caused an accident. But AI is different. It can make its own decisions, and sometimes we don’t even know how it reached a certain conclusion. This is a big deal because if we cannot figure out who is responsible, the whole system of justice breaks down. This paper is about figuring out how to close that gap and make sure our legal system keeps up with technology.

In April 2021, the Supreme Court’s Artificial Intelligence Committee launched a portal called the Supreme Court Portal for Assistance in Courts Efficiency (SUPACE), the Court’s first engagement with AI. SUPACE was created, among other things, to provide the digital infrastructure needed for the digitization of the legal system. Former Chief Justice of India S. A. Bobde, who chaired the AI Committee at the time, emphasized the concerns about using AI in judicial decision-making and the significance of an impartial judicial mind in the form of a judge. More recently, in March 2023, the Punjab & Haryana High Court consulted ChatGPT, an artificial intelligence application, while deciding a bail plea.

The Supreme Court also employed AI to transcribe its live-streamed proceedings during a February 2023 hearing on the political power struggle in Maharashtra. In the court of Chief Justice of India D.Y. Chandrachud, a screen was set up to display the live transcription of the proceedings.

1.2 Statement of the Problem

The core issue is that our current laws simply aren’t equipped for AI. The problem is what experts call the “responsibility gap.” If an autonomous car gets into an accident, who is responsible: the person who designed the software, the company that built the car, or the person who was just a passenger?

On top of that, many advanced AI systems are like a “black box.” They work, but we can’t see inside them to understand their reasoning. This makes it almost impossible to prove what went wrong in a court of law. Without clear rules for who is accountable, victims can be left without a way to get compensation, and companies won’t know what standards they need to meet. It’s a mess, and we need to fix it.

1.3 Objectives and Scope

Our main goal is to understand what’s wrong with our current legal system when it comes to AI. To do that, we’re going to:

  • Look at why our current laws on things like negligence and product defects don’t work well for AI.
  • Explore new, big ideas being debated, like whether we should give AI its own legal status or force companies to make their AI more transparent.

We’re focusing specifically on the legal side of things—who is liable and how to prove it. We won’t be getting too deep into the technical details of how AI works or the broader ethical questions beyond what is relevant to the law.

1.4 Research Methodology

This study employs a doctrinal legal research methodology. This approach involves a systematic analysis of existing legal frameworks, including statutes, case law, and scholarly articles related to AI liability. 

2. Understanding AI and Its Legal Dimensions

A major paradigm shift is currently taking place in Indian law, driven largely by the growing use of artificial intelligence (AI). AI may make the legal system more efficient, competent, and unbiased. Nevertheless, because of differences in conceptual framework, application scenario, and prospective competence, artificial intelligence cannot entirely replace human judges. Legal professionals in India now have access to AI-driven tools like Manu Patra and SC Judgments that quickly and accurately comb through huge databases of legal documents, legislation, and precedents, replacing time-consuming manual searches through volumes of legal texts. Beyond speeding up research, this transformation improves the accuracy and thoroughness of legal information retrieval, giving legal professionals a significant advantage in their decision-making and case-building. However, this technical change in Indian law also raises concerns about data privacy, security, transparency in AI decision-making, and the possibility of algorithmic bias. Establishing a strong regulatory framework is therefore essential to ensuring appropriate AI integration in the Indian legal system.

2.1 Defining Artificial Intelligence

John McCarthy, referred to as the father of AI, has described it as “the science and engineering of making intelligent machines.”

At its simplest, Artificial Intelligence (AI) refers to computer systems that can perform tasks that typically require human intelligence. This isn’t one single thing, but a broad field that includes several types of technology. Machine learning, for example, is a type of AI where systems learn from data without being explicitly programmed for every scenario. This allows them to identify patterns, make predictions, and adapt their behaviour over time. Another type is deep learning, which uses complex, multi-layered neural networks to process data in a way that mimics the human brain. The key takeaway is that modern AI is dynamic and learns on its own, which is a major departure from the static software of the past.
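To make the contrast with traditional, explicitly programmed software concrete, the short Python sketch below shows a toy “machine learning” routine. Everything in it is hypothetical and purely illustrative: the perceptron, the loan-style data, and the feature names are assumptions chosen for brevity, not a description of any real system.

    # A minimal, purely illustrative sketch: the decision rule is learned from
    # labelled examples rather than written out by a programmer as if-then code.

    def train_perceptron(examples, epochs=20, lr=0.1):
        """Learn weights from (features, label) pairs; no rule is hand-coded."""
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(epochs):
            for (x1, x2), label in examples:
                prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
                error = label - prediction
                # The "learning" step: nudge the weights in proportion to the error.
                w[0] += lr * error * x1
                w[1] += lr * error * x2
                b += lr * error
        return w, b

    # Hypothetical training data: (income, debt) pairs labelled approve (1) or reject (0).
    data = [((5.0, 1.0), 1), ((6.0, 0.5), 1), ((2.0, 3.0), 0), ((1.5, 2.5), 0)]
    weights, bias = train_perceptron(data)
    print(weights, bias)  # The resulting rule was inferred from data, not authored by anyone.

The legal significance of this structure is that the behaviour of the finished system depends on the training data as much as on the code, which already complicates the question of who “made” the decision.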

2.2 Autonomous Systems and Decision-Making

An autonomous system is a machine or software that can operate and make decisions independently, without direct human input. A self-driving car is a perfect example: it uses sensors and AI to perceive its environment, predict the actions of other vehicles, and navigate its route all on its own. The decision-making process in these systems is often complex and goes beyond simple “if-then” logic. Instead, the AI makes probabilistic decisions—weighing different factors and choosing the option it calculates as most likely to succeed. This makes its actions highly unpredictable to an outside observer and can create situations where even the developers can’t fully explain the system’s choice.
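The following short Python sketch illustrates, in a deliberately simplified way, what “probabilistic” decision-making means here. The action names, probabilities, and values are invented for illustration; real autonomous-driving systems are vastly more complex.

    # A toy illustration of probabilistic decision-making: score each candidate
    # action by its estimated chance of success times its value, then pick the best.

    def choose_action(options):
        """Select the action with the highest expected outcome."""
        return max(options, key=lambda o: o["p_success"] * o["value"])

    # Hypothetical options when the vehicle ahead brakes suddenly.
    options = [
        {"action": "brake hard",      "p_success": 0.90, "value": 1.0},
        {"action": "swerve left",     "p_success": 0.60, "value": 1.2},
        {"action": "maintain course", "p_success": 0.30, "value": 1.5},
    ]
    print(choose_action(options)["action"])  # "brake hard" (0.90 beats 0.72 and 0.45)

Because the probabilities themselves come from learned models, small shifts in those estimates can flip the chosen action, which is one reason the behaviour can be hard to predict or to reconstruct after the fact.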

2.3 Legal and Ethical Implications of AI Deployment

The ability of AI to learn and make decisions on its own introduces several serious legal and ethical challenges.

  • The “Black Box” Problem: Many advanced AI systems, particularly those using deep learning, are essentially a “black box.” We can see the input and the output, but we can’t trace the exact series of calculations that led to the decision. In a legal context, this is a nightmare. Proving negligence or a product defect often requires explaining why something went wrong. If no one can explain how the AI arrived at its decision, it becomes nearly impossible to hold anyone accountable.
  • Algorithmic Bias: AI systems are only as good as the data they’re trained on. If that data contains societal biases (e.g., historical hiring data that favours male applicants), the AI will learn and perpetuate those same biases; a small sketch after this list illustrates the mechanism. When an AI is used in critical areas like hiring, lending, or criminal justice, this can lead to discriminatory outcomes that violate fundamental rights and laws.
  • The Responsibility Gap: As mentioned earlier, AI blurs the lines of responsibility. In a traditional accident, a person or a company is clearly at fault. With AI, blame is diffused among multiple parties: the data provider, the programmer, the manufacturer, the integrator, and the user. The lack of a single, clear responsible party makes it difficult for victims to seek compensation and for the law to enforce justice.
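The bias point in the list above can be shown in a few lines of Python. The data below is entirely synthetic and the “model” is deliberately naive; the point is only that a system fitted to skewed historical outcomes reproduces that skew without any explicit rule about group membership.

    # Entirely synthetic hiring records: (group, hired) pairs with a built-in skew.
    history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

    def hire_rate(records, group):
        outcomes = [hired for g, hired in records if g == group]
        return sum(outcomes) / len(outcomes)

    # A naive "model" that predicts the majority outcome seen for each group
    # simply inherits the disparity baked into the training data.
    learned_rule = {g: round(hire_rate(history, g)) for g in ("A", "B")}
    print(hire_rate(history, "A"), hire_rate(history, "B"))  # 0.8 vs 0.3
    print(learned_rule)  # {'A': 1, 'B': 0}: group B applicants are always rejected.

Notice that no line of code mentions bias; the discrimination lives entirely in the data, which is why a system can pass technical review and still produce unlawful outcomes.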

3. The Big Problem: Old Laws vs. New Technology

3.1 Our Laws Were Made for People, Not Robots

Think of our legal system as a set of rules for how humans should behave. The whole idea of holding someone responsible is based on the assumption that a person or a company made a decision.

  • In civil law, like when someone sues for a car accident, we look for negligence. This means someone had a duty of care (like driving safely) and failed, causing harm.
  • In criminal law, we demand a guilty mind (mens rea)—the intent to commit a crime—and a criminal act (actus reus).

But what happens when an AI is involved? An autonomous car, for instance, makes its own decisions about steering, braking, and accelerating. If it gets into a crash, who’s to blame? Was it the carmaker who built it? The engineer who wrote the code? The person who “owned” the car but wasn’t even driving? Or the AI itself? The line of responsibility gets really blurry, really fast.

3.2 Why AI Breaks Our Legal Framework

AI’s ability to learn and adapt is amazing, but it also creates legal chaos. Here’s why our old laws don’t work so well for new tech:

  • You Can’t Predict the Unpredictable: For a negligence claim to work, you have to prove that a person should have foreseen the harm. But with AI, a system can learn and change so much that it might do something completely unexpected, making it impossible for a programmer to have foreseen the outcome.
  • The “Black Box” Mystery: To prove a cause-and-effect relationship, you need to show that a specific action led to a specific harm. However, a lot of modern AI is a “black box” — we can see what goes in and what comes out, but we can’t see the complex thought process in between. This makes proving causation incredibly difficult.
  • No Intent, No Crime: A robot can’t have a “guilty mind.” It can’t feel malice or intent. So, holding a machine criminally liable is impossible under our current laws.
  • Who Do You Blame? Victims of an AI-related incident face a huge challenge in simply figuring out who to sue. And the legal system struggles to divide the blame among all the parties involved.

4. When AI Makes a Mistake, Who Takes the Blame?

When an AI system does something wrong, our legal system often hits a wall. This section explores why it’s so hard to find a responsible party and what that means for both civil and criminal law.

4.1 The “Black Box” Problem 

Imagine you’re in a courtroom, and you need to explain why an accident happened. With a human, you can ask for their reasons. But with advanced AI, you can’t.

Many AI systems are like a “black box”: you can see what goes in and what comes out, but the internal decision-making process is a total mystery. It’s so complex that even the people who created the AI can’t always explain why it made a certain choice.

This is a huge problem in legal cases, which are all about transparency and tracing cause and effect. For example, if an AI misdiagnoses a patient’s tumor, how do you hold anyone accountable if no one can explain why the algorithm made that error?

4.2 The Hurdles of Holding Someone Liable 

AI’s “black box” nature creates serious challenges for both civil and criminal cases.

In civil law, it’s tough to prove who was negligent. Developers might argue they built the system carefully, while users might say they had no real control over the AI’s actions. This leaves a gap where no one is clearly at fault, making it difficult for victims to get compensation.

In criminal law, the situation is even more complicated. You can’t charge a person with a crime unless you can prove they had a guilty mind (mens rea) or the intent to do wrong. Since an AI can’t have intentions, you can’t prosecute it. And it can be unfair to prosecute a human operator if they had no control over the AI’s decision.

This leads to a phenomenon called the “moral crumple zone.” Just like a car’s crumple zone absorbs the force of a crash, the human operator often ends up taking the blame for a failure that was really caused by a flaw in the AI’s design or a systemic error.

5. Conclusion and Way Forward

5.1 Key Findings 

Our journey through this paper has made one thing very clear: our traditional legal system is struggling to keep up with the rapid pace of AI. We found that laws built on concepts like negligence and human intent simply don’t fit in a world where autonomous machines make their own decisions. The biggest roadblocks are the “black box” problem, where no one can explain how an AI made a decision, and the “moral crumple zone,” where humans unfairly take the blame for a system’s failure. These issues create a huge “responsibility gap,” leaving victims with no clear path to justice.

5.2 Recommendations 

To fix this, we can’t just tweak old laws; we need a fresh start. Here’s what we suggest:

  • Create AI-Specific Laws for India: India needs to move past a reliance on old doctrines and create new legislation tailored for the unique challenges of AI. This new law should define what accountability means for autonomous systems.
  • Mandate Explainable AI (XAI): Companies should be legally required to build AI that isn’t a “black box.” The law should demand algorithmic transparency, so we can understand how decisions are made, making it possible to trace the cause of an error (a simple illustration of what such transparency can look like follows this list).
  • Introduce Mandatory Insurance: To protect people, companies that deploy high-risk AI systems (like self-driving cars or medical diagnostic tools) should be required to have mandatory liability insurance. This would ensure that victims are compensated even when the blame is hard to pinpoint.
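To give a sense of what an explainability requirement could mean in practice, here is a minimal Python sketch. It assumes a simple, hypothetical linear scoring model with invented feature names; genuine deep-learning systems do not decompose this neatly, which is exactly why the recommendation is that explainability be designed in from the start.

    # A hypothetical linear credit-scoring model whose decision can be itemised
    # feature by feature, i.e. the opposite of a "black box".

    def explain_decision(weights, features):
        """Return the overall score and each feature's contribution to it."""
        contributions = {name: weights[name] * value for name, value in features.items()}
        return sum(contributions.values()), contributions

    weights = {"income": 0.6, "existing_debt": -0.8, "years_employed": 0.3}
    applicant = {"income": 4.0, "existing_debt": 2.5, "years_employed": 1.0}

    score, why = explain_decision(weights, applicant)
    print(score)  # 0.7
    print(why)    # {'income': 2.4, 'existing_debt': -2.0, 'years_employed': 0.3}

An accountability record of this kind, preserved for each decision, is the sort of evidence a court or an insurer would need in order to trace the cause of an error.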

6. Bibliography

  • Abbott, Ryan. “The Reasonable Robot: Artificial Intelligence and the Law.” George Washington Law Review, vol. 86, 2018, pp. 855-885.
  • Barfield, Woodrow. Cyber-Humans: Our Future with Robots. Stanford University Press, 2015.
  • Glessner, Madeline L. “The Moral Crumple Zone: A Case Study in Driverless Cars and Human-Centered Design.” Washington Law Review, vol. 96, 2021, pp. 1-25.
  • Gupta, Ankita Kumar, and Gunjan Malhotra Ahuja. “Reshaping the Indian Legal System with Artificial Intelligence.” Manu Patra, n.d.
  • Hallevy, Gabriel. “Liability for Crimes Involving Artificial Intelligence Systems.” Santa Clara Law Review, vol. 60, 2020, pp. 147-190.
  • Hindustan Times. “Live transcription of Supreme Court proceedings introduced.” 2023, available at: https://www.hindustantimes.com/india-news/live-transcription-of-supreme-court-proceedings-introduced101677004607162.html (last visited December 21, 2023).
  • The Indian Express. “CJI launches top court’s AI-driven research portal.” 2021, available at: https://indianexpress.com/article/india/cji-launches-top-courts-ai-driven-research-portal-7261821/ (last visited December 21, 2023).
  • Keeton, W. Page, et al. Prosser and Keeton on Torts. 5th ed., West Academic, 1984.
  • Kingston, John. “Artificial Intelligence and Legal Liability.” Research Handbook on the Law of Artificial Intelligence, edited by Woodrow Barfield and Ugo Pagallo, Edward Elgar, 2018, pp. 326-340.
  • McCarthy, John. “What Is Artificial Intelligence?.” Stanford University, 2007, available at: http://jmc.stanford.edu/articles/whatisai/whatisai.pdf.
  • Model Penal Code. American Law Institute, 1985.
  • Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2015.
  • Restatement (Third) of Torts: Product Liability. American Law Institute, 1998.
  • The Times of India. “In a first, Punjab and Haryana high court uses Chat GPT to decide bail plea.” 28 March 2023.
