Published On: February 2nd 2026
Authored By: Vaishnavi Ravindra Urmode
Marathwada Mitra Mandal's Shankarrao Chavan Law College, Pune
Abstract
Artificial intelligence now shapes consequential decisions in healthcare, transportation, finance, and governance, and its autonomy unsettles the traditional legal assumption that harmful outcomes can be traced to an identifiable human act. The 2018 fatality caused by an autonomous vehicle during Uber’s testing programme in Arizona, where investigators struggled to allocate responsibility among the software’s design, the supervising human operator, and the deploying corporation, illustrates the gap [1]. This article evaluates the resulting tension between established liability doctrines and systems capable of independent, data-driven decision-making, compares regulatory responses in the European Union, the United States, and India, and examines how legal systems may evolve to secure accountability in an age of machine autonomy.
Introduction
Artificial intelligence now plays a central role in influencing decisions in healthcare, transportation, finance, and governance. Its autonomy introduces a significant departure from traditional legal assumptions that harmful outcomes can be traced to an identifiable human act. When an autonomous vehicle caused a fatality in Arizona during Uber’s testing programme, investigators struggled to determine whether liability rested with the software’s design, the supervising human operator, or the corporation deploying the technology [1]. Such incidents highlight the limitations of doctrines built around foreseeability, control, and human intention.
Understanding Autonomous Decision-Making
Modern AI, and deep learning in particular, does not rely merely on human-programmed instructions. Instead, it generates internal representations and decision pathways that may be opaque even to its designers, creating what scholars describe as a “black box” effect [2]. Commentators further argue that the unpredictability inherent in such complex systems undermines negligence as a regulatory mechanism [3].
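To make the point concrete for non-technical readers, the sketch below (a hypothetical illustration in Python using the scikit-learn library; the data is synthetic and the network deliberately tiny) shows what an investigator actually recovers from a trained model: numeric weight matrices rather than inspectable rules.

```python
# Hypothetical sketch of the "black box" effect, assuming Python + scikit-learn.
# Even for a tiny neural network, the parameters behind a decision are raw
# numbers, not rules a court could test for defect or negligence.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for, e.g., sensor or credit features.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# A deliberately small network; deployed systems hold millions of parameters.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
model.fit(X, y)

print("Decision for the first input:", model.predict(X[:1])[0])

# The only record of "why" is a stack of learned weight matrices.
for i, w in enumerate(model.coefs_):
    print(f"Layer {i} weights, shape {w.shape}")
```

Nothing in that output maps onto the foreseeability or intent inquiries that negligence doctrine presupposes.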
Core Challenges in Fixing Legal Liability
The Black Box Problem
Legal systems rely on a clear chain of causation. However, the opacity of machine-learning models makes reconstructing the reasoning process nearly impossible. Institutional inquiries such as the House of Lords Select Committee on AI have noted these evidentiary challenges [4].
Unpredictability of Machine Learning Systems
Machine-learning systems may behave in ways entirely unforeseen by their developers, destabilising foreseeability, a cornerstone of negligence liability [5].
Fragmented Responsibility
AI development and deployment involve multiple actors, including data providers, model developers, manufacturers, and users. The OECD has observed that such fragmentation of roles complicates accountability [6].
Absence of AI-Specific Legal Standards in India
India lacks binding AI regulations. NITI Aayog’s ethical recommendations remain non-enforceable [7], and the Digital Personal Data Protection Act 2023 does not address algorithmic harms [8].
Liability Models Currently Applied
- Product Liability
AI that evolves post-sale complicates the traditional definition of a “defect”. Tesla Autopilot litigation illustrates this ambiguity [9].
- Negligence
Negligence requires foreseeability, but AI outcomes may not be foreseeable even with adherence to standards [10].
- Vicarious Liability
AI is not an employee, and scholars caution against expanding vicarious liability to autonomous systems [11].
- Strict Liability
High-risk AI systems may justify strict liability, especially in contexts such as autonomous vehicles or medical AI [12].
Comparative Jurisdictional Perspectives
European Union
The EU’s Artificial Intelligence Act establishes a risk-based regulatory structure [13]. The accompanying AI Liability Directive reduces the evidentiary burden by allowing presumptions of causation in certain cases [14].
United States
The US adopts a sectoral approach. Automated driving guidance published by the Department of Transportation offers safety frameworks but no unified liability rules [15]. Tesla-related litigation underscores inconsistencies in judicial reasoning [16].
India
India’s approach is fragmented. The IT Act 2000 continues to serve as a primary legal tool [17], while policy frameworks such as the IndiaAI Roadmap highlight ongoing gaps [18].
Illustrative Case Discussions
The 2018 Uber autonomous vehicle fatality remains a pivotal event in shaping global discussions on AI liability [19]. Litigation involving Tesla Autopilot further demonstrates the difficulty courts face when assigning blame to semi-autonomous systems [20].
Ethical and Policy Considerations
Transparency and Explainability
Scholars highlight that explainability must be embedded into AI systems to ensure meaningful accountability [21].
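By way of contrast with the opaque model sketched earlier, the brief example below (again hypothetical, in Python with scikit-learn; the feature names are invented for illustration) shows one form embedded explainability can take: an interpretable model whose full decision logic can be printed and examined.

```python
# Hypothetical sketch of built-in explainability, assuming Python + scikit-learn.
# A shallow decision tree, unlike the neural network above, can render its
# complete reasoning as human-readable rules.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented feature names standing in for inputs to a high-stakes decision.
features = ["age", "income", "credit_history", "loan_amount", "tenure"]
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text yields a rule trace: the kind of record a statutory
# transparency duty could require operators to retain and disclose.
print(export_text(tree, feature_names=features))
```

The trade-off is capability: the most powerful systems are rarely this legible, which is why scholars frame explainability as a design obligation rather than an afterthought.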
Bias and Fairness
Studies show that AI systems may reproduce biases present in their training data, raising constitutional concerns about equality and non-discrimination [22].
Human-in-the-Loop Safeguards
Ethical frameworks such as IEEE’s Ethically Aligned Design stress meaningful human oversight [23].
Regulatory Sandboxes
Regulatory sandboxes allow experimentation under supervision, helping authorities understand AI risks [24].
The Way Forward: Legal Reforms for India
Proposed reforms include statutory transparency duties, strict liability for high-risk AI systems, mandatory insurance, and the creation of a specialised AI regulatory authority. Insurance-backed models may support innovation while ensuring that victims are compensated [25], and legal scholars have likewise endorsed a dedicated oversight body [26].
Conclusion
AI autonomy disrupts liability law by complicating foreseeability, causation, and responsibility. Comparative models provide direction, but India must tailor accountability structures to its institutional realities.
References
[1] State of Arizona v Uber Technologies Inc (Arizona Superior Court, 2018)
[2] Lilian Edwards and Michael Veale, ‘Slave to the Algorithm?’ (2017) 16 Duke L & Tech Rev 18
[3] Jack M Balkin, ‘The Path of Robotics Law’ (2015) 6 Cal L Rev Circuit 45
[4] House of Lords Select Committee on AI, AI in the UK: Ready, Willing and Able? (HL Paper 100, 2018)
[5] Bryant Walker Smith, ‘Proximity-Driven Liability for Autonomous Vehicles’ (2014) 51 Harv J Legis 123
[6] OECD, Principles on Artificial Intelligence (2019)
[7] NITI Aayog, National Strategy for Artificial Intelligence (2018)
[8] Digital Personal Data Protection Act 2023
[9] Huang v Tesla Inc (ND Cal, 2019)
[10] Karen Yeung, ‘Algorithmic Accountability’ (2018) 15 Regulation & Governance 1
[11] Ryan Calo, Robotics, Automation and the Law (Harvard UP 2019)
[12] Patrick Lin, Ryan Jenkins and Keith Abney, ‘Autonomous Vehicles and Moral Responsibility’ (2017) 3 Ethics & Info Tech 69
[13] European Commission, Artificial Intelligence Act COM (2021) 206 final
[14] European Commission, AI Liability Directive COM (2022) 496 final
[15] US Department of Transportation, Automated Driving Systems 2.0: A Vision for Safety (2017)
[16] Estate of Banner v Tesla Inc (Florida District Court, 2021)
[17] Information Technology Act 2000
[18] MeitY, IndiaAI Roadmap (2021)
[19] Arizona Police Department, ‘Report on the 2018 Tempe Autonomous Vehicle Fatality’ https://www.azdps.gov/ accessed 10 December 2025
[20] Huang v Tesla Inc (n 9)
[21] Edwards and Veale (n 2)
[22] European Union Agency for Fundamental Rights, Getting the Future Right: Artificial Intelligence and Fundamental Rights (2020)
[23] IEEE Global Initiative, Ethically Aligned Design (2019)
[24] World Economic Forum, AI Governance Framework (2021)
[25] John Kingston, Artificial Intelligence and Legal Liability (OUP 2020)
[26] Mireille Hildebrandt, Law for Computer Scientists and Other Folk (OUP 2020)