AI and Legal Liability: Challenges of accountability in autonomous decision-making

Published On: October 7th 2025

Authored By: Tanuj Kumar
Aligarh Muslim University

ABSTRACT

Artificial Intelligence (AI) has rapidly evolved from a futuristic concept to a practical reality influencing decision-making across industries in India. Its integration into governance, commerce, finance, and even the justice system presents unprecedented opportunities but equally complex challenges in terms of legal liability. Autonomous decision-making systems often operate without direct human input, creating ambiguity in identifying responsibility when harm or loss occurs.[1] This article examines the challenges of assigning liability for AI-driven decisions within the Indian legal framework, analyses constitutional and statutory provisions, explores judicial approaches, and identifies gaps in existing laws. The study also engages with doctrinal debates over personhood, agency, and fault attribution in the AI context, proposing legal reforms aimed at ensuring accountability without stifling innovation.

INTRODUCTION

The rise of AI-based autonomous decision-making systems marks a paradigm shift in how decisions are made and executed in India. From predictive policing tools to algorithmic lending decisions, AI is increasingly replacing human judgment in areas once deemed the exclusive domain of human discretion. However, with this shift comes the challenge of determining legal liability when such systems cause harm or infringe upon rights. Traditional legal frameworks were built around human actors, corporations, and identifiable agents. Autonomous systems—especially those using machine learning—complicate this model because their decisions can be opaque, adaptive, and unpredictable.

In India, the intersection of AI and law is still in its nascent stage. While the Information Technology Act, 2000 (IT Act) governs aspects of digital operations, it does not directly address autonomous decision-making liability. Similarly, tort law, contract law, and constitutional principles offer partial remedies but lack specific AI-oriented provisions. The Supreme Court has, in several cases, acknowledged the significance of technological evolution, yet a coherent judicial or legislative policy on AI liability remains absent.

This paper critically evaluates the challenges in attributing liability for AI decisions under Indian law, focusing on the doctrines of negligence, product liability, vicarious liability, and constitutional accountability.

THE NATURE OF AI AND AUTONOMOUS DECISION-MAKING

AI refers to computational systems designed to perform tasks requiring human-like intelligence, including reasoning, learning, and problem-solving. In the Indian legal context, the complexity increases when AI systems become autonomous—capable of making decisions without human intervention.[2]

Key features that complicate liability include:

  • Opacity (Black-Box Problem): AI algorithms, particularly deep learning models, can be so complex that even their creators cannot fully explain specific decision-making processes.[3]
  • Adaptivity: AI systems can evolve based on data inputs, meaning their outputs may change unpredictably over time.
  • Distributed Agency: AI decisions often result from interactions between multiple algorithms and human operators, making it difficult to isolate fault.

Under Indian law, these features clash with the traditional fault-based liability model that requires identifiable negligent or intentional conduct.

EXISTING LEGAL FRAMEWORK IN INDIA

  1. CONSTITUTIONAL PROVISIONS

The Indian Constitution indirectly governs AI deployment through fundamental rights. AI decisions affecting life, liberty, and equality implicate Article 21 (right to life and personal liberty) and Article 14 (equality before the law). Algorithmic bias, for example, could violate Article 14 by enabling arbitrary or discriminatory outcomes. The Puttaswamy v Union of India judgment extended Article 21 to include informational privacy, which is directly relevant for AI systems processing personal data.

  2. STATUTORY PROVISIONS

The Information Technology Act, 2000[4] regulates electronic records, cybersecurity, and intermediaries, but contains no explicit provisions for AI liability. Section 79 provides “safe harbour” protection to intermediaries, but its application to AI service providers remains uncertain.

The Consumer Protection Act, 2019[5] could be invoked in cases of defective AI products or deficient services, placing liability on manufacturers or service providers. However, proving “defect” in an autonomous decision-making system is complex because the cause may lie in data bias, algorithmic errors, or unforeseen interactions.

  3. TORT LAW

Under Indian tort law, negligence and product liability could be applied to AI harm. In product liability, a manufacturer can be held responsible for defects that cause harm. However, AI introduces the problem of determining whether the defect was in the initial programming, the data training, or the post-deployment evolution of the system.

CHALLENGES IN ASSIGNING LEGAL LIABILITY

  1. ATTRIBUTION OF FAULT

Traditional liability frameworks require a direct causal link between conduct and harm. With AI, identifying this link becomes challenging because the decision may be influenced by factors beyond human foresight. For instance, if a financial AI system wrongly rejects a loan application due to biased training data, who is liable—the developer, the data provider, or the deploying bank?

  2. OPACITY AND EXPLAINABILITY

The black-box nature of advanced AI makes it difficult to produce admissible evidence in court. Indian evidence law, under the Indian Evidence Act, 1872,[6] is ill-equipped to deal with algorithmic decision-making where even experts cannot fully explain the rationale.

  3. VICARIOUS LIABILITY

The doctrine of vicarious liability holds employers responsible for acts of their employees. In AI, the “employee” is a machine, and courts must decide whether the deploying entity should be treated analogously.

  4. PRODUCT LIABILITY GAPS

Unlike in the European Union, Indian product liability law does not have a separate category for autonomous systems. This gap means courts must stretch existing categories, often leading to inconsistent outcomes.

JUDICIAL PERSPECTIVES IN INDIA

While there is no direct Indian precedent squarely addressing “AI liability,” courts have adjudicated on cases involving automation, technology-mediated harm, and intermediary liability, which can be analogically applied.

Shreya Singhal v Union of India[7] struck down the draconian Section 66A of the IT Act, 2000, and read down the Section 79 safe harbour, holding that intermediaries cannot be made liable without “actual knowledge” of illegality in hosted content. This principle could be extended to AI-deploying entities, suggesting that liability may arise only upon awareness of harmful AI behaviour, unless a strict liability regime is legislated.

In Justice K.S. Puttaswamy (Retd.) v Union of India,[8] the Supreme Court recognised informational privacy as intrinsic to Article 21 of the Constitution. Since AI systems depend on large datasets, the case offers constitutional grounding to challenge AI-driven violations of privacy, especially in surveillance applications.

Internet and Mobile Association of India v Reserve Bank of India[9] is notable for the Court’s insistence that even policy decisions involving complex technology must satisfy proportionality and reasonableness tests. This could guide judicial scrutiny of AI regulations and liability frameworks.

The judiciary has also shown willingness to grapple with automation-related evidentiary disputes. In Anvar P.V. v P.K. Basheer,[10] the Supreme Court redefined the admissibility of electronic evidence under Section 65B of the Evidence Act, a principle crucial for AI-related litigation where digital logs or algorithmic outputs may serve as key evidence.

Together, these cases suggest that while Indian jurisprudence is not AI-specific, the constitutional lens of rights protection and statutory interpretation adaptability will form the backbone of judicial responses to AI liability until legislation catches up.

COMPARATIVE INSIGHTS FOR INDIAN LAWMAKERS

Although India must tailor its approach to domestic conditions, examining comparative jurisdictions helps identify viable legal strategies.

  1. EUROPEAN UNION:

The EU’s proposed AI Liability Directive (2022)[11] introduces:

  • Fault-based liability for AI-related harm, with a reversed burden of proof in certain cases.
  • Strict liability for operators of “high-risk AI” under the EU AI Act, including systems in critical infrastructure, healthcare, and policing.

The EU model explicitly recognises the opacity challenge and eases victims’ evidentiary burdens accordingly.
  2. UNITED STATES:

The US has no federal AI liability statute. Liability is generally pursued under existing tort principles:

  • Product liability for defective AI products.[12]
  • Negligence for inadequate design, testing, or warning.
  • Contractual remedies for breach of service-level agreements.

However, the US model leaves victims to overcome high evidentiary thresholds, which may be ill-suited for India’s socio-economic context where access to expert witnesses is limited.

  3. SINGAPORE AND OECD GUIDANCE:

Singapore’s Model AI Governance Framework emphasises explainability, human-in-the-loop oversight, and risk categorisation.[13] The OECD’s AI Principles encourage accountability mechanisms but are non-binding. India could integrate these soft-law principles into a binding domestic statute.[14]

For India, a hybrid approach—adopting strict liability for high-risk AI, fault-based liability for low-risk systems, and mandatory transparency—would reconcile the need for innovation with constitutional rights protection.

PROPOSED LEGAL REFORMS FOR INDIA

  1. AI-Specific Liability Statute: A dedicated law defining AI entities, risk categories, and liability standards.
  2. Mandatory Explainability Standards: Requiring AI providers to ensure algorithms can be meaningfully explained in legal proceedings.
  3. Strict Liability for High-Risk AI: As with hazardous activities under Rylands v Fletcher, and the stricter Indian doctrine of absolute liability developed in M.C. Mehta v Union of India, high-risk AI systems could attract strict liability.
  4. Algorithmic Audit Requirements: Mandatory audits to detect bias and ensure compliance with equality and privacy rights.
  5. Insurance Schemes: Requiring AI operators to maintain liability insurance to cover potential harms.

CONCLUSION

The rapid proliferation of Artificial Intelligence in India’s decision-making ecosystem—spanning healthcare, finance, transport, governance, and criminal justice—has redefined the scope of legal liability. Autonomous systems blur traditional fault lines between human and machine, creating gaps in the attribution of responsibility. Indian law, while equipped with principles under the Information Technology Act, 2000, the Bharatiya Nyaya Sanhita, 2023, and constitutional safeguards under Articles 14 and 21, is still reactive rather than anticipatory. Courts have begun acknowledging the need for algorithmic accountability, but jurisprudence remains nascent.

The absence of a dedicated AI liability framework risks leaving victims without effective remedies while shielding corporate actors behind layers of technical complexity. To bridge this gap, India must adopt a hybrid approach—embedding strict liability for high-risk AI applications, mandating algorithmic transparency, and establishing sector-specific oversight bodies.

Moreover, judicial training on AI, statutory recognition of autonomous decision-making, and harmonisation with emerging global AI norms are essential. The challenge is not merely technological, but constitutional—ensuring AI augments rather than undermines human dignity, equality, and due process. In essence, the accountability of autonomous systems in India must evolve from a theoretical debate into a codified, enforceable reality, ensuring justice keeps pace with innovation.

REFERENCES

[1] Anay Mehrotra and Samriddh Sharma, ‘Addressing Product and Service Liability Concerns in Artificial Intelligence: An Indian Perspective’ Law School Policy Review (12 February 2025)

[2] Pravin Anand et al, ‘Developing AI within India’s Regulatory Framework’ Asia Business Law Journal (7 March 2025)

[3] ArXiv contributors, ‘The Conflict Between Explainable and Accountable Decision-Making Algorithms’ (2022)

[4] Information Technology Act 2000, s 79.

[5] Consumer Protection Act 2019, s 2(34).

[6] Indian Evidence Act 1872, s 65B.

[7] Shreya Singhal v Union of India (2015) 5 SCC 1.

[8] Justice K.S. Puttaswamy (Retd.) v Union of India (2017) 10 SCC 1.

[9] Internet and Mobile Association of India v Reserve Bank of India (2020) 10 SCC 274.

[10] Anvar P.V. v P.K. Basheer (2014) 10 SCC 473.

[11] European Commission, ‘Proposal for a Directive on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence’ COM (2022) 496 final.

[12] Restatement (Third) of Torts: Products Liability (American Law Institute 1998).

[13] Infocomm Media Development Authority, ‘Model AI Governance Framework’ (Second Edition, 2020).

[14] OECD, ‘Recommendation of the Council on Artificial Intelligence’ OECD/LEGAL/0449 (2019).
