AI and Legal Liability: Challenges of Accountability in Autonomous Decision-Making

Published on: 29th March, 2026

Authored by: Shantanu Dipak Jagtap
Ismail Saheb Mulla Law College, Satara

Abstract

Artificial Intelligence (AI) has emerged as a transformative technology with profound implications for legal liability frameworks, particularly in jurisdictions like India where statutory law has yet to fully adapt to autonomous decision-making. This article examines the conceptual and doctrinal challenges in attributing responsibility when AI systems act without direct human control, producing outcomes that are often probabilistic, opaque, and transnational in nature. Drawing upon comparative perspectives from the European Union, United States, and select Asian jurisdictions, the paper analyses how existing Indian legal instruments—principally the Information Technology Act 2000, the Consumer Protection Act 2019, and sector-specific regulations—fall short in addressing AI-specific harms. Key areas of concern include evidentiary difficulties in proving causation, jurisdictional fragmentation in cross-border enforcement, and institutional capacity gaps within both regulatory bodies and the judiciary. The article argues for an integrated approach that links accountability and enforcement, proposing a hybrid liability regime for high-risk AI uses, coupled with compulsory insurance schemes and robust pre-market certification processes. In doing so, it underscores that legal reform alone is insufficient; without parallel investment in technical expertise, forensic capabilities, and international cooperation, AI liability provisions risk becoming ineffectual. By situating India’s regulatory challenges within a global comparative framework, the article offers a roadmap for building a resilient AI accountability system capable of protecting rights while enabling innovation.

1. Introduction

The rapid development of artificial intelligence (AI) has moved beyond simple algorithmic assistance into fields where systems operate with substantial autonomy. In India, AI-driven systems are now used in healthcare diagnosis, financial decision-making, crime prediction, and even bail recommendations.[1] While this growth offers scalability and efficiency, it also presents a fundamental moral and legal quandary: who ought to be held liable when an artificial intelligence system causes harm?

The issue is real rather than hypothetical. The fatal 2018 collision in Arizona involving an Uber self-driving test vehicle illustrates the tangible hazards of machine decision-making. The use of facial recognition technology by the National Crime Records Bureau in India and of AI-based credit scoring tools by fintech companies shows that similar accountability questions are close at hand.

Under conventional legal systems, liability for damage attaches to a natural or legal person whose acts or omissions can be linked to the harm through established doctrines such as negligence, vicarious liability, or strict liability.[2] Autonomous artificial intelligence (AI) systems, by contrast, operate with a degree of unpredictability and opacity that makes such attribution difficult.

This article examines the “accountability gap” that emerges from such autonomy. It analyses the adequacy of Indian statutory provisions—the Information Technology Act 2000, the Consumer Protection Act 2019, and the Digital Personal Data Protection Act 2023—and identifies their shortcomings in regulating harm caused by autonomous AI. Comparative insights from the European Union, the United States, and China illustrate possible reform trajectories for India.[3]

2. Understanding Autonomous Decision-Making in AI

Definition and Nature
Autonomous decision-making in AI refers to the ability of a system to execute tasks, make judgments, and adapt to new situations without direct human intervention at the point of operation. While the initial design and programming are human-driven, the decision-making process—especially in machine learning and deep learning systems—evolves dynamically, often in ways not explicitly foreseen by the developers.[4]

Assistive vs. Autonomous AI
One must differentiate between autonomous and assistive artificial intelligence:

  • Assistive AI supports human decision-making but requires human review before action is taken. Examples: writing tools like Grammarly or legal research software such as SCC Online’s AI search feature.
  • Autonomous AI makes decisions and acts on them without real-time human supervision. Examples: AI systems that automatically approve or reject loan applications; AI-based surveillance that independently issues e-challans; or AI-driven medical imaging devices that directly suggest treatments.

Indian Use Cases

  • Finance: Indian NBFCs and fintech start-ups increasingly rely on AI credit scoring without human underwriting, giving rise to disputes over wrongful loan denials.
  • Healthcare: AI diagnostic platforms—such as those used in telemedicine—are being trialled to provide treatment recommendations. A misdiagnosis here raises difficult questions of medical negligence liability.
  • Law Enforcement: Predictive policing models and facial recognition technologies, like those deployed in Delhi for crowd control, operate largely without human validation.
  • Traffic Management: Automated ANPR (Automatic Number Plate Recognition) systems directly trigger fines without officer verification.

Such applications demonstrate AI’s growing independence in areas directly affecting fundamental rights and livelihoods, making the question of accountability both urgent and complex.

3. The Accountability Gap

Conventional Indian liability systems—whether grounded in contract, tort, consumer law, or criminal justice—assume that a human or corporate person is the decision-maker. AI challenges this premise in three main ways:

(i) Opacity and the Problem of the Black Box

Advanced artificial intelligence systems, especially deep neural networks, operate in ways that even their creators may be unable to fully explain.[5] In litigation, this opacity makes it extremely difficult to prove causation or breach of duty. If an AI-driven investment algorithm causes heavy losses to customers, for example, establishing why it made a particular decision may require technical forensics that courts are ill-equipped to conduct.

(ii) Multiplicity of Actors

An AI system’s operation involves several parties—programmers, data providers, algorithm trainers, system deployers, and end-users—and harm may result from any combination of their contributions. In a self-driving car accident, for instance, fault might lie with the sensor manufacturer, the software developer, or the operator that failed to keep the algorithm updated, making liability diffuse and contestable.[6]

(iii) Criminal Responsibility Without Mens Rea

Under the Indian Penal Code 1860 (now replaced by the Bharatiya Nyaya Sanhita 2023), Indian criminal law generally requires a mental element—intent, knowledge, or recklessness. Since artificial intelligence systems lack mental states, attributing criminal intent to them is conceptually untenable. Holding the human developer criminally responsible instead requires proving foreseeability of harm—a very high bar in autonomous learning environments.

The net effect is that AI can cause harm falling outside the scope of current liability laws, allowing developers or deployers to avoid responsibility and leaving victims uncompensated.

4. The Present Indian Legal Framework and Its Limitations

India presently relies on a patchwork of general legislation to address AI-related harm; none of it was drafted with autonomous decision-making in view.

(a) Information Technology Act 2000 (IT Act)

Targeted mainly at cybercrime and electronic commerce, the IT Act establishes intermediary liability (s 79) and penalties for unauthorised access, data breaches, and hacking. Although an AI service provider may in some cases qualify as an “intermediary”,[7] the Act does not contemplate decision-making autonomy or the attribution of non-human fault. Moreover, the s 79 safe harbour allows intermediaries to escape liability by demonstrating lack of knowledge or control—a defence that autonomous systems make easier for AI deployers to claim.

(b) Consumer Protection Act 2019 (CPA)

The CPA provides remedies for deficient services and defective products. A faulty AI decision—for instance, the wrongful rejection of an insurance claim—could be challenged as a deficiency in service. The CPA framework, however, presumes a human or corporate actor delivering the service and struggles with harm arising from a self-learning algorithm’s evolving post-deployment behaviour.

(c) Digital Personal Data Protection Act 2023 (DPDP Act)

The DPDP Act governs personal data processing, including automated decision-making, and requires data fiduciaries to implement safeguards. Its primary concern, however, is privacy and consent, not broader harms such as physical injury, financial loss, or discrimination resulting from AI outputs.

(d) Common Law Tort Principles

Common law tort principles could theoretically apply to AI-related harm. Negligence, however, requires proof of a duty of care, breach, foreseeability, and causation—elements the opacity of artificial intelligence systems undermines. The strict liability rule of Rylands v Fletcher[8] is likewise an awkward fit, as it concerns inherently hazardous “things”, not autonomous decision-making processes.

5. Comparative Global Approaches

Different legal traditions, policy priorities, and socioeconomic circumstances have produced markedly different responses to autonomous decision-making, each reflecting its own difficulties in assigning legal liability. India, which has yet to create a comprehensive AI liability framework, can find both cautionary lessons and possible templates in these international models.

The European Union has been the most proactive and deliberate. The Artificial Intelligence Act (AIA) is built on a risk-based taxonomy and has now completed its legislative passage.[9] AI systems are classified into minimal-risk, high-risk (e.g., medical devices, biometric identification), and unacceptable-risk categories (e.g., social scoring), with corresponding compliance obligations. Crucially, the EU intends to extend its Product Liability Directive to cover harm caused by AI systems, including a rebuttable presumption of causation where specified evidentiary criteria are met.[10] This effectively shifts the burden of proof away from victims, addressing the complexity and opacity of AI systems—problems especially acute in India, where technical literacy remains limited.

By contrast, the United States has avoided centralised AI legislation in favour of a segmented, sector-based approach. Agencies such as the Food and Drug Administration (FDA), for AI-driven medical devices,[11] and the Federal Trade Commission (FTC), for consumer protection, regulate AI within existing legal frameworks. Liability is determined largely through tort principles—negligence, product liability, and breach of warranty—with courts progressively adapting doctrine to AI contexts.[12] Although this decentralised approach offers flexibility and encourages innovation, it also produces inconsistency and leaves gaps for cross-sectoral harms—problems India is already encountering in law enforcement and fintech AI deployments.

China presents yet another paradigm: centralised, prescriptive, and vigorously enforced. Regulatory instruments such as the Provisions on the Administration of Algorithmic Recommendation Services (2022) and the Measures for the Management of Generative Artificial Intelligence Services (2023) require mandatory registration of algorithms, impose content restrictions, and hold service providers expressly accountable for harmful outputs. Although some of these measures are driven by political imperatives as much as legal principle, China’s clarity in assigning responsibility—to developers, deployers, or both—is instructive for countries like India, where the “black box” defence frequently frustrates enforcement.

In sum, the EU prioritises predictability and consumer protection; the US emphasises sectoral flexibility and judicial adaptation; and China guarantees unambiguous attribution backed by state authority. India would probably be best served by a hybrid approach combining the EU’s burden-shifting provisions, the US’s adaptable sectoral oversight, and China’s clear allocation of responsibility. Such a synthesis would not only clarify the law but also advance India’s twin goals of encouraging digital innovation and protecting citizens’ rights.

6. Suggested Reforms to the Indian Legal Framework

The absence of a dedicated legal framework for AI liability leaves India with a structural vacuum that demands urgent attention. Any reform must begin with the recognition that autonomous artificial intelligence systems inhabit a world in which human intention and machine action are increasingly decoupled. This disconnect undermines conventional negligence-based liability criteria, which presuppose a clear human act or omission. It is therefore imperative to adopt a strict liability regime[13] for high-risk AI applications, so that victims can obtain compensation without having to prove fault, thereby reducing the burden on injured parties.

Liability alone, however, cannot be the sole regulatory instrument. For artificial intelligence systems capable of affecting constitutionally protected rights, particularly under Articles 14 and 21 of the Constitution,[14] the reform agenda must also include mandatory human supervision.[15] This is especially important in fields such as predictive policing, credit scoring, and medical diagnostics, where opaque algorithms can introduce systemic bias or procedural injustice. Such oversight should be formalised through legislative provisions for “meaningful human control”, in line with international best practice as reflected in the European Union’s Artificial Intelligence Act.[16]

Another structural change would be the creation of a specialised AI Regulatory Authority, empowered to certify, audit, and monitor AI systems both before and after deployment. Certification would serve as a preventive tool, akin to pre-market clearance in pharmaceutical regulation, while continuous audits would provide a corrective mechanism for emerging hazards. Maintaining a central register of certified AI models would help the government coordinate across agencies and provide public transparency.

Finally, a compulsory AI liability insurance scheme should be introduced to operationalise liability without discouraging innovation. Insurance would guarantee prompt compensation for victims while giving developers and deployers an incentive to adopt higher ethical and safety standards. Such a strategy reconciles justice with innovation, grounding India’s AI future in both legal certainty and economic growth.

7. Obstacles to Accountability and Enforcement in AI Liability

The AI liability debate centres on the question of “who is to blame” when an autonomous system causes damage, but in India it is bound up with a larger and more systemic problem: enforcement. As a theoretical concept, accountability presupposes a clearly identifiable defendant and a legal standard against which their conduct may be judged. In the world of artificial intelligence, both of these components are thrown into disarray.

From an accountability standpoint, artificial intelligence generates situations in which harm occurs without direct human intervention: an autonomous car chooses to swerve, a diagnostic AI misclassifies a tumour, or an algorithmic trading bot triggers a market anomaly. In such instances, traditional doctrines such as product liability or vicarious liability struggle to fit the facts. The layered character of AI ecosystems worsens the difficulty: a single decision may reflect the work of software engineers, dataset curators, system integrators, and end-users, often spread across several countries. Deciding where liability should rest when human control is diminished is therefore not a matter of simple forensic analysis but of normative allocation—of deciding, as a society, which actor bears primary responsibility.

Enforcement then compounds this difficulty. Indian authorities such as the Ministry of Electronics and Information Technology (MeitY) and adjudicatory bodies under the IT Act currently lack both the technical competence and the legal authority to investigate AI-specific harms properly. Cross-border factors, such as cloud-based AI hosted in other jurisdictions, complicate service of process, acquisition of digital evidence, and enforcement of judgments even where liability can be established in principle.

Globally, governments have responded with hybrid approaches that pair mandatory insurance schemes with strict liability for particular high-risk AI applications, as is evident in parts of the EU framework. Such systems guarantee compensation even where fault is difficult to prove and lower the evidentiary burden on victims. India could adopt a comparable approach, but enforcing it would require a fundamental upgrade of technical competence within regulatory agencies, the establishment of specialised AI tribunals, and the integration of digital forensics capabilities into investigative units.

The larger point is that AI liability enforcement must be anticipatory rather than merely reactive. Without thorough pre-market testing, certification criteria, and real-time monitoring of deployed systems, liability provisions will remain empty promises. Accountability and enforcement are therefore co-dependent rather than consecutive stages of AI regulation: one cannot hold an actor responsible without the tools to investigate and establish responsibility, and enforcement without defined standards of accountability risks descending into arbitrary punishment.

8. Conclusion

As autonomous decision-making proliferates, Indian law confronts a liability vacuum it is not yet equipped to fill. Doctrines built around human fault falter when harm flows from a system whose logic is opaque and whose creation is dispersed across many actors and jurisdictions. As this article has argued, accountability and enforcement are intertwined: a liability rule without investigative capacity is worthless, and enforcement without explicit standards risks arbitrariness.

India needs a hybrid strategy: strict liability for high-risk artificial intelligence applications, compulsory insurance to ensure victim compensation, and pre-market certification to screen dangerous systems before deployment. Equally, implementing these reforms depends on investment in international cooperation, technical expertise, and judicial training, as well as in forensic infrastructure.

The legitimacy of India’s legal system will increasingly depend on its capacity to answer a basic but crucial question: when machines act, who answers? Without decisive action, artificial intelligence will not only blur the lines of culpability but also erode public trust in law itself.

Footnotes:

[1] NITI Aayog, National Strategy for Artificial Intelligence #AIforAll (Government of India, 2018) 5–9.

[2] Donoghue v Stevenson [1932] AC 562 (HL).

[3] European Commission, Proposal for a Regulation on Artificial Intelligence (Artificial Intelligence Act) COM (2021) 206 final.

[4] Brent Mittelstadt, Chris Russell and Sandra Wachter, ‘Explaining Explanations in AI’, Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19, 2019) 279–288.

[5] Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data & Society 1.

[6] National Transportation Safety Board, Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian (Report No NTSB/HAR-19/03, November 2019).

[7] Information Technology Act 2000, s 79; Shreya Singhal v Union of India (2015) 5 SCC 1.

[8] Rylands v Fletcher (1868) LR 3 HL 330.

[9] European Commission, Proposal for a Regulation on Artificial Intelligence COM (2021) 206 final.

[10] Proposal for a Directive of the European Parliament and of the Council on Liability for Defective Products COM (2022) 495 final.

[11] US Food and Drug Administration, Artificial Intelligence and Machine Learning in Software as a Medical Device (FDA, 2023) https://www.fda.gov accessed 10 August 2025.

[12] Restatement (Second) of Torts (American Law Institute 1965).

[13] Rylands v Fletcher (1868) LR 3 HL 330.

[14] Constitution of India 1950, article 14, 21.

[15] Justice K S Puttaswamy (Retd) v Union of India (2017) 10 SCC 1.

[16] European Commission, Artificial Intelligence Act (n 9).


Bibliography

Statutes & Legislative Materials

  1. Indian Penal Code, 1860, Act No. 45 of 1860 (India).
  2. Bharatiya Nyaya Sanhita, 2023, Act No. 45 of 2023 (India), The Gazette of India, Extraordinary, Part II, Section 1.
  3. Information Technology Act, 2000, Act No. 21 of 2000 (India).
  4. Consumer Protection Act, 2019, Act No. 35 of 2019 (India).
  5. Digital Personal Data Protection Act, 2023, Act No. 22 of 2023 (India).
  6. European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final.

Case Laws
7. Sherras v. De Rutzen [1895] 1 QB 918 (DC) (UK).
8. Rylands v. Fletcher (1868) LR 3 HL 330 (UK).
9. M.C. Mehta v. Union of India (Oleum Gas Leak Case), (1987) 1 SCC 395 (India).

Reports & Policy Documents
10. NITI Aayog, National Strategy for Artificial Intelligence (2018), Government of India.
11. Ministry of Electronics & Information Technology, Responsible AI for All (2021), Government of India.
12. UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).

Books & Journal Articles
13. Andrew D. Selbst & Solon Barocas, “The Intuitive Appeal of Explainable Machines” (2018) 87 Fordham Law Review 1085.
14. Roger Brownsword, Law, Technology and Society: Reimagining the Regulatory Environment (Oxford University Press 2019).
15. Vidushi Marda, “Artificial Intelligence Policy in India: A Framework for Ethical AI” (2020) Carnegie India.
