Published on 22nd July 2025
Authored By: Vidushi Rastogi
MIT World Peace University
Introduction
The advent of Artificial Intelligence (AI) has catalysed a paradigm shift across multiple sectors including healthcare, finance, transportation, and even law itself. From self-driving cars to predictive policing and generative AI tools, AI systems now routinely perform tasks that were once the exclusive domain of humans. However, with this technological evolution comes a significant legal challenge: how should the law attribute liability when AI causes harm or acts unpredictably?
Traditional legal doctrines, rooted in human agency, foreseeability, and mens rea, are increasingly inadequate in addressing the complex question of accountability in the context of autonomous systems. As AI grows more independent in its decision-making, particularly through machine learning and deep learning models, assigning responsibility becomes an ambiguous legal task. This article explores the inadequacies of current liability frameworks, assesses proposed legal models, and analyses global approaches to AI regulation, particularly through the lens of emerging Indian legal discourse.
Legal Liability: Classical Doctrines and Human Agency
At its core, legal liability refers to the responsibility of an individual or entity to answer for a breach of legal duty, whether civil or criminal. In tort law, liability usually arises from negligence, nuisance, or intentional harm, while in criminal law, liability is grounded in the presence of mens rea (criminal intent) and actus reus (criminal act). In contract law, breach of duty arises from a failure to meet stipulated obligations.
These traditional frameworks presume a human actor capable of intention, negligence, or duty. When a surgeon makes an error during surgery, when a manufacturer releases a defective product, or when a person defames another online, liability can be traced back to an individual or a corporate body through human agency. But AI challenges this very foundation. Who is to be held liable when a self-driving car swerves and causes an accident? When an AI-based hiring algorithm discriminates based on race? Or when a generative AI creates defamatory or misleading content?
Challenges AI Poses to Established Liability Structures
The legal challenge stems from three primary features of AI:
- Autonomy: Unlike conventional software, AI systems—especially those based on machine learning—can act independently after training, often producing outcomes unforeseen even by their developers.
- Opacity (the “Black Box” Problem): Many AI algorithms, particularly deep neural networks, operate as opaque systems, making it difficult to ascertain the precise decision-making process that led to a specific output or harm.[1]
- Adaptability: AI systems learn from new data and evolve their behaviour over time, introducing unpredictability and diverging from their original programming.
These attributes complicate the assignment of liability. Should the developer be held accountable for programming flaws? The user for misapplication? The manufacturer for providing insufficient safeguards? Or can liability be attributed to the AI itself—a non-human entity with no legal personhood?
Categorising AI Liability Risks by System Type
AI is not a monolith. Different systems pose different levels of risk:
- Narrow AI (e.g., recommendation algorithms, chatbots): These operate within a fixed set of tasks and carry relatively low liability risks. Still, they may cause reputational or financial harm if misused.
- Autonomous Vehicles: These present high-stakes scenarios. A self-driving car must make real-time decisions involving human safety, raising questions of liability in the event of traffic accidents or death.
- Generative AI (e.g., GPT models, image generators): These tools can produce text, code, and visuals that may infringe copyright, propagate misinformation, or even generate defamatory statements, leading to potential liability for creators and deployers.
Understanding the nature and purpose of each system is critical in tailoring an appropriate liability framework.
Current Legal Frameworks: India and Global Developments
1. India’s Legal Position
India currently lacks a dedicated legal framework to govern AI and its liabilities. The Information Technology Act, 2000[2] (as amended) is the principal legislation for digital activities, but it does not expressly contemplate AI-specific liability. Provisions dealing with intermediaries, data protection (now largely governed by the Digital Personal Data Protection Act, 2023), or cybercrime touch upon some of these issues but fall short of addressing autonomous decision-making.
Moreover, the Indian Penal Code, 1860[3], and civil liability statutes presume human culpability, thereby making them ill-suited for addressing harms caused by non-human actors like AI. The absence of case law also adds to the uncertainty surrounding AI liability.
That said, NITI Aayog’s Discussion Paper on Responsible AI (2021)[4] and MeitY’s draft national strategy on AI[5] indicate governmental interest in regulating AI but stop short of proposing a liability framework.
2. European Union: The AI Act 2024
The EU AI Act, formally adopted in 2024, is the world’s first comprehensive legal framework regulating artificial intelligence. It classifies AI applications into four risk categories: unacceptable, high-risk, limited-risk, and minimal-risk. For high-risk systems, strict requirements are imposed, including transparency, human oversight, and accountability.
While the Act does not explicitly assign liability in all cases, it complements the revised Product Liability Directive and the proposed AI Liability Directive, which collectively aim to streamline liability rules. The proposed approach incorporates both strict liability and fault-based liability, depending on the system and the harm caused.[6]
3. United States: A Sectoral and Case-Law Driven Approach
The U.S. follows a sectoral and decentralised model, where liability is determined largely through tort litigation and guidance from agencies like the Federal Trade Commission (FTC). For instance, in United States v. Princeton Review (2017), the FTC held companies accountable for deceptive algorithmic practices.
Additionally, recent discussions have explored whether Section 230 of the Communications Decency Act (which grants immunity to platforms for third-party content) should apply to AI-generated content. Courts remain divided on this issue.
Emerging Legal Models of AI Liability
As legislatures and jurists grapple with AI’s legal challenges, several liability models have emerged:
- Product Liability Analogy
One approach is to treat AI as a product. Under this model, the manufacturer or developer can be held liable for defects in design, development, or instructions. This mirrors traditional consumer protection law. However, the autonomous and adaptive nature of AI complicates this analogy, particularly for systems that evolve post-deployment.
- Vicarious Liability
This model draws from employment law, where employers are held liable for the actions of their employees. Here, an AI could be treated as an agent, with the principal (developer or user) bearing responsibility. However, this presumes a high level of control, which is often absent in autonomous AI systems.
- Strict Liability
For ultra-hazardous AI applications, such as autonomous weapons or vehicles, some scholars suggest a no-fault liability regime, where the operator or owner is held liable regardless of negligence. This promotes accountability while reducing litigation complexity.
- AI as an Electronic Person
In 2017, the European Parliament controversially proposed the creation of a “legal status” for AI systems as “electronic persons” responsible for making good any damage they cause. While largely symbolic and not adopted into law, the idea has spurred global debate. Critics argue that granting legal personhood without legal responsibility or consciousness is problematic and ethically flawed.
Judicial Developments and Case Law
While jurisprudence is still nascent, several international incidents highlight the gravity of unresolved liability:
- Tesla Autopilot Crashes (USA): Multiple accidents involving Tesla’s autonomous driving feature have led to lawsuits against the company for negligence and failure to warn users. These cases illustrate the blurred line between human error and algorithmic misjudgement.[7]
- Facial Recognition Misuse: In 2020, Robert Williams, a Black man, was wrongfully arrested in Detroit due to a facial recognition error. The ACLU sued the Detroit Police Department, claiming that negligent reliance on unverified AI tools violated constitutional rights.[8]
- COMPAS Algorithm Case: In State v. Loomis (Wisconsin, 2016), a defendant challenged the use of the COMPAS algorithm in sentencing, alleging due process violations due to the opaque and racially biased nature of the algorithm. The court upheld its use, sparking global debate.[9]
The Need for a Dedicated AI Liability Framework in India
India’s AI ecosystem is rapidly expanding, with public and private sector applications in finance, health, education, agriculture, and policing. However, the absence of a tailored legal framework creates significant risks of unaccountable AI deployment. The current reliance on the Information Technology Act, 2000, and traditional tort and criminal law is grossly insufficient to address the unique legal issues posed by autonomous systems.
A comprehensive AI liability framework in India should include the following elements:
- Risk-Based Categorisation of AI Systems
Following the EU’s model, India could adopt a tiered framework distinguishing between minimal-, limited-, and high-risk AI systems. This allows focused regulation without overburdening innovation.
- Mandatory Transparency and Explainability
Legal obligations should require AI developers to build systems capable of being audited and explained, especially for decisions affecting rights (e.g., denial of loans, arrests).
- Assignment of Vicarious or Strict Liability
For systems causing physical, financial, or reputational harm, strict liability should apply regardless of the developer’s or user’s intent. Liability could be shared among developers, deployers, and data providers based on their roles.
- Mandatory Insurance Mechanisms
Similar to automobile insurance, high-risk AI applications could be subject to mandatory insurance schemes to ensure compensation for victims.
- Regulatory Sandboxes and Testing Environments
The government can establish regulatory sandboxes to test AI products in controlled environments, allowing real-world experimentation without compromising public safety.
- Independent Oversight Authority
An autonomous AI Regulatory Authority should be constituted to oversee standards, investigate harms, and advise courts on the complexities of AI-related disputes.
India’s Digital India Act (forthcoming) may incorporate some of these suggestions, but the urgency of addressing AI liability requires proactive legislative action.
Conclusion
Artificial Intelligence represents both a technological marvel and a legal conundrum. As AI systems increasingly make independent decisions affecting human lives, the law must evolve to assign accountability in a fair, predictable, and technologically literate manner. Existing legal doctrines grounded in human intent and foreseeability are inadequate to address the opacity, autonomy, and complexity of AI decision-making.
Global legal systems have begun to respond—some, like the EU, through formal regulation; others, like the U.S., via judicial innovation. India stands at a crossroads. As it races toward AI integration across governance and industry, it must also build a robust and anticipatory legal framework to prevent harm, ensure justice, and foster public trust.
The time has come to embrace legal innovation alongside technological innovation. A well-designed AI liability regime in India would not only protect citizens from algorithmic harm but also encourage responsible AI development through legal clarity. In the age of autonomous systems, accountability must remain human.
References
[1] W. Nicholson Price II, Black-Box Medicine, 28 Harv. J.L. & Tech. 419 (2015).
[2] The Information Technology Act, No. 21 of 2000, INDIA CODE.
[3] The Indian Penal Code, No. 45 of 1860, INDIA CODE.
[4] NITI Aayog, Responsible AI: A Discussion Paper (Feb. 2021), https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-Discussion-Paper.pdf.
[5] Ministry of Electronics and Information Technology (MEITY), IndiaAI Portal, https://www.indiaai.gov.in.
[6] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[7] See e.g., Huang v. Tesla, Inc., No. 5:19-cv-01463 (N.D. Cal. 2019).
[8] Complaint, Williams v. City of Detroit, No. 2:21-cv-11528 (E.D. Mich. July 21, 2021), https://www.aclu.org/cases/williams-v-detroit.
[9] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).