AI and Legal Liability: Challenges of Accountability in Autonomous Decision-Making

Published on 26th July 2025

Authored By: Parth Attry
UILS, Chandigarh University

ABSTRACT

In the modern world, Artificial Intelligence (AI) has become an integral part of many businesses, revolutionizing industries and enabling significant advances in sectors such as healthcare, finance, and transportation. As AI becomes more prevalent, its spread raises critical questions about legal liability and moral responsibility. This article explores the complexities of assigning legal liability where AI makes consequential decisions, covering autonomous vehicles, medical diagnostics, algorithmic governance, and judicial decision-making. Notable cases discussed include Uber's Tempe pedestrian fatality (2018), the COMPAS recidivism algorithm, and Sultzer v. Intuitive Surgical (USA, 2021–2024). India's existing laws, however, have not been reformed to cover the full spectrum of these challenges. The article also surveys global perspectives on AI regulation in jurisdictions such as the UK, the USA, and India, and argues for the need to improve AI transparency and maintain ethical conduct.

INTRODUCTION

The rapid growth of Artificial Intelligence in the 21st century, and its incorporation into industrial sectors such as healthcare, finance, and transportation, has played an important role in their development. While these technologies offer unprecedented efficiency and innovation, they also raise a pressing question: who is responsible when AI makes a mistake? The challenge lies in the fact that traditional legal frameworks were not designed to cover the intricacies of AI. Traditional legal theories of negligence, product liability, and criminal responsibility presume that a human made the decision that caused harm. AI systems, however, learn quickly and analyse large amounts of data autonomously, so causation is not always easy to trace, which makes it harder to identify the culpable party. This has created a legal vacuum in which victims of AI-related harm may struggle to find justice, while developers, operators, and institutions may not clearly understand their obligations or liabilities. In the Indian context, these challenges are more profound. India is investing heavily in AI-driven industrial solutions, but its legal and regulatory systems have not kept pace. Existing statutes, such as the Information Technology Act, 2000 (as amended in 2008), do not directly address the complexity of AI decision-making. This gap raises significant concerns, particularly in sectors like healthcare, where errors can result in death, or finance, where algorithmic bias can lead to systemic exclusion or fraud.

WHAT IS AI?

Artificial intelligence (AI) refers to computer systems capable of learning or mimicking human intelligence in both thought and behaviour. AI systems are typically built on large databases and neural networks composed of software-generated algorithms. "AI" is an umbrella term that encompasses a variety of methods and technologies that give computers the ability to mimic human intellect. Analytical processes such as learning, reasoning, language comprehension, and decision-making fall within this category.

CATEGORIES OF AI

  • Machine learning (ML): the branch of AI concerned with creating algorithms that computers can use to learn from data and act on it. There are broadly three types of ML: supervised, unsupervised, and reinforcement learning (a brief code illustration follows this list).
  • Neural network: a type of ML algorithm designed to simulate the structure and function of the human brain. It is composed of layers of interconnected nodes, or neurons, that process information and make predictions.
  • Natural language processing (NLP): a branch of AI that focuses on the interaction between computers and human language. Its algorithms analyse and interpret human speech and text, allowing machines to understand and respond to it.[1]
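
To make the first category concrete, here is a minimal sketch of supervised learning, in which an algorithm is trained on human-labelled examples and then predicts labels for unseen cases. It assumes the Python scikit-learn library; the features, values, and labels are invented purely for illustration.

```python
# Minimal sketch of supervised machine learning: the model learns a
# mapping from labelled examples, then predicts labels for new data.
# All data below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [driver attention level (0-1), obstacle distance in metres]
X_train = [[0.9, 50], [0.8, 40], [0.1, 5], [0.2, 8]]
y_train = ["safe", "safe", "collision", "collision"]  # human-provided labels

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # "supervised": learns from labelled data

print(model.predict([[0.15, 6]]))  # predicts a label for an unseen case
```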

NAVIGATING LEGAL LIABILITY AND ACCOUNTABILITY IN AI APPLICATIONS

AI applications span a wide array of industries, each with distinctive implications for legal liability and accountability:

Healthcare: AI systems are used for diagnostics, treatment recommendations, and personalised medicine. They can identify complex diseases and analyse complex medical data, which can lead to more accurate diagnoses and treatment. However, AI also has drawbacks: errors in diagnostic reports can lead to inappropriate treatment and significant legal consequences.

Finance: In finance, AI systems are used for algorithmic trading, fraud detection, and credit scoring, and can execute trades at optimal times based on market conditions. These applications rely on large databases and complex algorithms to make decisions. Legal concerns arise from biased decision-making and a lack of transparency, as the sketch below illustrates.
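
As a hedged illustration of how bias can creep into credit scoring, the following toy model is trained on invented, historically skewed approval data; every feature, value, and outcome here is hypothetical, not a description of any real lender's system.

```python
# Hypothetical sketch: a credit-scoring model trained on skewed
# historical decisions learns to reproduce that bias.
# All features, values, and labels are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Features per applicant: [annual income, years of credit history, area code]
X = [[12, 5, 0], [10, 4, 0], [11, 5, 1], [10, 4, 1]]
y = [1, 1, 0, 0]  # 1 = approved, 0 = rejected (historically biased outcomes)

model = LogisticRegression().fit(X, y)

# Two applicants identical except for area code:
print(model.predict([[11, 5, 0], [11, 5, 1]]))
# Likely output: [1 0] -- the model penalises area code 1 simply because
# past decisions did, the kind of opaque bias that raises legal concern.
```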

Transportation and automation: In autonomous vehicles, AI is the central application used to navigate and make driving decisions. Because the vehicle is driven by AI, it becomes difficult to assign accountability when an accident occurs, as it is complex to determine whether liability lies with the manufacturer, the software developer, or the vehicle owner.[2]

LIABILITY IN THE CASE OF AN AUTONOMOUS VEHICLE

An autonomous vehicle, also known as a self-driving car, relies on AI to control its operations. These vehicles process large amounts of data through a combination of sensors, actuators, complex algorithms, machine learning systems, and powerful processors. The various sensors in the car map its environment: radar monitors nearby vehicles' positions and distances; cameras detect traffic lights, road signs, vehicles, and people; and LIDAR uses light to measure distance, find road edges, and detect lane lines. The main concern at present is not whether India is ready for autonomous cars, but whether Indian laws are equipped to handle the challenges they raise, chiefly accountability for accidents.
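
To show why causation is hard to trace when such a system errs, here is a deliberately simplified, hypothetical sketch of sensor fusion, where several independent inputs feed one driving decision. The function name, thresholds, and readings are all invented and bear no relation to any real vehicle's logic.

```python
# Hypothetical sketch of fusing sensor inputs into a driving decision.
# All thresholds and readings are invented; real systems are far more complex.

def braking_decision(radar_distance_m: float,
                     camera_sees_pedestrian: bool,
                     lidar_lane_clear: bool) -> str:
    """Combine independent sensor readings into a single action."""
    if camera_sees_pedestrian and radar_distance_m < 30:
        return "emergency_brake"   # perception and proximity both triggered
    if not lidar_lane_clear:
        return "slow_down"         # LIDAR reports an obstructed lane
    return "maintain_speed"

# A fault in any one sensor, or in the fusion logic itself, changes the
# outcome, which is why liability is hard to pin on a single party.
print(braking_decision(radar_distance_m=25,
                       camera_sees_pedestrian=True,
                       lidar_lane_clear=True))   # -> emergency_brake
```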

Regulations for autonomous vehicles in developed countries such as the UK and the USA are more progressive. The UK's Automated and Electric Vehicles Act 2018 provides that the insurer (or, where the vehicle is uninsured, the owner) is liable for accidents caused by a vehicle driving itself.

In the USA, by contrast, regulations vary between states; some place liability on the manufacturer when the vehicle is operating in autonomous mode. The fatal Uber accident in Arizona, however, showed that human error can still play a significant role.

In India, there is currently no specific legislation or provision regulating autonomous vehicles. Accidents involving conventional vehicles are governed by the Motor Vehicles Act, 1988 and the Bharatiya Nyaya Sanhita, which follow a "no fault" liability principle. Applying these provisions to accidents involving self-driving cars, however, raises questions about the manufacturer's responsibility. Amendments to the Motor Vehicles Act proposed in 2016, which aim to exempt certain vehicles to promote research and innovation, are still pending. The Bharatiya Nyaya Sanhita contains provisions on rash driving, causing death by negligence, causing hurt, and causing grievous hurt, but the current statutes do not cover self-driving cars.[3]

LEGAL STRUCTURE AND ARTIFICIAL INTELLIGENCE

CURRENT OVERVIEW

AI has advanced significantly in recent years, driving rapid progress across various industries. Its widespread use calls for a strong legal structure to regulate it. Currently, the rules and regulations governing AI vary substantially across the world; the legal principles that do exist are designed to address concerns about accountability and negligence.

INDIA

India, a major technology player among the developing countries of the world, still does not have a comprehensive law specifically for AI. However, existing laws do affect how AI is used.

The Information Technology Act, 2000 deals with digital issues such as data protection and cybersecurity, which are important for AI. The Personal Data Protection Bill, 2019 (since replaced by the Digital Personal Data Protection Act, 2023) is also significant: it sets rules for how data is collected, used, and stored, with a focus on user consent and privacy.[4]

Since there are no AI-specific laws yet, India relies on general legal principles from statutes such as the Indian Penal Code (now the Bharatiya Nyaya Sanhita) and the Consumer Protection Act, 2019. These cover matters like negligence, responsibility, and duty of care.

For example, if an AI system causes harm, its makers could be held responsible under tort law. Negligence, the failure to take reasonable care, is key in deciding who is at fault in AI-related disputes.

UNITED KINGDOM AND THE EUROPEAN UNION

The General Data Protection Regulation (GDPR), a European Union framework retained in UK law as the UK GDPR, was not drafted specifically for artificial intelligence but incorporates provisions on automated decision-making and profiling. These provisions require that individuals be informed when significant decisions about them are made by automated means. The European Union also continues to develop a dedicated AI regulatory structure to strengthen the GDPR's protections against the risks of AI.

USA

The framework in the United States is distinct from those of other countries. AI is regulated through federal and state rules covering particular areas, including data privacy, discrimination, and autonomous cars. The Federal Trade Commission (FTC) has released recommendations on the use of artificial intelligence that focus on promoting openness, fairness, and accountability.[5]

NOTABLE CASES

CASE STUDY 1: Uber's Tempe Pedestrian Fatality (2018)

In 2018, a self-driving Uber car struck and killed a pedestrian in Tempe, Arizona. An investigation showed that the safety driver was not paying attention and that the car's system did not detect the person in time. The accident exposed problems with both the technology and its human supervision, prompting stronger calls for rules to make the testing of self-driving cars safer. It also showed the need for clear laws, human oversight, and responsibility on the part of both companies and drivers.[6]

CASE STUDY 2: COMPAS Recidivism Algorithm

The COMPAS tool is used in US courts to predict whether someone is likely to reoffend. In 2016, in State v. Loomis, a Wisconsin defendant argued that the tool was biased and had unfairly affected his sentence. The court held that the tool may be used but should not be the sole factor in sentencing, and that defendants should be allowed to question its fairness. The case highlights the need for fairness, transparency, and human oversight in AI-assisted decisions.[7]

CASE STUDY 3: Sultzer v. Intuitive Surgical (USA, 2021–2024)

In 2021, Sandra Sultzer underwent colon cancer surgery using the da Vinci surgical robot at Baptist Health Boca Raton Regional Hospital, Florida. According to a lawsuit filed by her husband, the robot, made by Intuitive Surgical, Inc., caused internal burns through leaking electrical energy, leading to a perforated intestine, multiple complications, and ultimately her death in February 2022. The lawsuit claims the robot was poorly designed and lacked proper insulation, and that surgeons were not adequately trained; it also accuses the company of failing to warn users of the risks and of pressuring hospitals to adopt the technology. Intuitive Surgical has faced several similar lawsuits and previous FDA recalls. The case raises serious concerns about safety, oversight, and accountability in robotic-assisted surgery.[8]

AREAS FOR IMPROVEMENT

  • Who is responsible when AI causes harm? When AI systems make mistakes or cause harm, it is important to know who is accountable. Laws can make clear who should be held responsible and how people can take legal action when something goes wrong.
  • Protecting Privacy and Personal Data: AI needs a lot of data to work well. Laws like the GDPR make sure that AI systems handle personal data safely and openly. This protects people’s privacy and gives them control over their information.
  • Making AI Decisions Clear: Sometimes AI systems make decisions that are hard to understand. Legal rules can require companies to explain how their AI works and why it made a certain decision. This helps people understand and challenge those decisions if needed.
  • Stopping Unfair Bias: AI can accidentally treat people unfairly because of biased data. Laws can help make sure AI systems are fair and don’t discriminate based on things like race, gender, or age.
  • Keeping AI Safe and Secure: AI used in areas like healthcare or self-driving cars must be safe. Legal rules can set safety standards to prevent harm and make sure these technologies work properly.[9]

  

REFERENCES

[1] Priyadarshi Nagda, 'Legal Liability and Accountability in AI Decision Making' (2025) 11 IJIRT (International Journal of Innovative Research in Technology) https://ijirt.org/publishedpaper/IJIRT174899_PAPER.pdf accessed 11 June 2025.

[2] ibid.

[3] Akshay Soni and Mandvi Khangarot, 'Autonomous Vehicles: Legislations for Liabilities' (Legal Service India, 2022) https://www.legalserviceindia.com/legal/article-10606-autonomous-vehicles-legislations-for-liabilities.html

[4] 'What Regulations Does India Have to Protect Citizens' Digital Rights amid Large-Scale AI Deployment?' (ET Government, 9 October 2024) https://government.economictimes.indiatimes.com/news/governance/what-regulations-does-india-have-to-protect-citizens-digital-rights-amid-large-scale-ai-deployment/111481042

[5] U Gasser and D R O'Brien, 'Building a Global Legal Framework for AI' (New York Times, 2017).

[6] 'Uber's Self-Driving Operator Charged over Fatal Crash' (BBC News, 16 September 2020) https://www.bbc.com/news/technology-54175359

[7] Nagda (n 1).

[8] '"Unreasonably Dangerous" Surgical Robot Fatally Burned Cancer Patient, Lawsuit Alleges' (People, 9 February 2024) https://people.com/da-vinci-surgical-robot-lawsuit-florida-woman-burned-8575129

[9] 'Is Artificial Intelligence Bound by a Legal Framework Too?' (Analytics Insight, 4 June 2023) https://www.analyticsinsight.net/is-artificial-intelligence-bound-by-a-legal-framework-too/
