Published On: September 30th 2025
Authored By: Hariom Awasthi
Shri Ram Institute of Law
Abstract
Artificial Intelligence (AI) is reshaping legal, economic, and social contexts by enabling machines capable of independent decision-making. While the advantages of AI systems are clear (efficiency, innovation, and the like), they raise significant questions of liability and responsibility when harm occurs. For example, who is liable when an autonomous vehicle causes an accident, and who is responsible when an algorithm makes an unwarranted adverse credit decision? This article examines the challenges of assigning legal liability for AI decision-making. It considers existing frameworks in tort, contract, and criminal law, surveys different regulatory approaches, and proposes solutions, including risk-distribution schemes, AI-specific liability regimes, and the prospect of affording AI limited legal personality.
Introduction
Artificial Intelligence has advanced beyond functions such as predictive text and chatbots into autonomous vehicles, medical diagnostics, algorithmic trading, and criminal sentencing software. With these advances, a significant legal question remains: who is liable when AI acts autonomously and causes harm? Traditional liability doctrines are premised on human intention (or negligence) and foreseeability, concepts that map poorly onto AI's opacity, autonomy, and unpredictability.¹
Courts and legislatures worldwide are grappling with whether existing laws can adapt or whether new legal regimes are necessary. In this article, we examine key challenges in assigning liability for autonomous decision-making, survey global approaches, and propose pathways for reform.
I. Conceptual Challenges in AI Liability
- Autonomy and the “Black Box Problem”
Contemporary AI, particularly deep learning, relies on algorithms whose internal decision-making even their creators cannot fully explain. This "black box problem" is problematic for liability because it undermines foreseeability, a core element of negligence law.²
- The Question of Legal Personality
One debated solution is to afford AI systems some degree of "electronic legal personality." Supporters argue that since corporations are legal persons, AI could likewise bear rights and obligations.³ Detractors argue it would merely insulate human actors (manufacturers, developers, deployers) from liability.
II. Liability in Different Legal Frameworks
- Tort Liability
Negligence requires a duty of care, breach of that duty, causation, and damage. With AI, the difficulty lies in establishing a duty of care and foreseeability.⁴ For example, in an accident involving an autonomous vehicle, who is liable: the driver, the manufacturer, or the software developer?
Product Liability: Courts may hold manufacturers strictly liable if the AI product is defective. The complication is that AI products change continually, and it may be unclear whether an injury arose from a defect in the original design or from the AI's later autonomous learning.⁵
Vicarious Liability: Employers may be liable when an AI tool makes a mistake in the course of business, but this assumes the AI is acting like an employee rather than as an autonomous agent.⁶
- Contractual Liability
AI systems are increasingly used to conclude contracts (e.g., in algorithmic trading). Disputes arise when contracts are formed or performed by AI without direct human involvement between the parties. Current law holds the human principal liable for the AI's conduct,⁷ but it does not address situations in which an AI creates obligations its principal could not have foreseen.
- Criminal Liability
Can AI commit a crime? Criminal law requires actus reus (a guilty act) and mens rea (a guilty mind). Because AI, by definition, lacks consciousness, direct criminal liability remains a largely theoretical question. Current law imposes liability on the natural or legal person who used the AI.⁸ Concerns remain, however, where human involvement is negligible.
III. Comparative Perspectives
European Union
The EU is taking a proactive approach. The draft Artificial Intelligence Act of 2021 establishes a risk-based regulatory scheme that imposes compliance obligations on high-risk AI systems before they may be put into use.⁹ The EU has also floated the option of an "electronic personality" as a legal interface with autonomous AI,¹⁰ but the proposal proved contentious.
United States
The US relies on existing tort and contract law. Courts usually attribute liability, generally through a product liability lens, to the developer or user.¹¹ The lack of AI-specific state or federal legislation has produced uncertainty and inconsistent outcomes.
India
India has no AI-specific legislation. Liability would instead be evaluated under existing laws such as the Consumer Protection Act, 2019, data protection legislation, and the Code of Civil Procedure. The NITI Aayog discussion paper (2018) encouraged AI regulation but stopped short of concrete recommendations on liability reform.¹²
IV. Case Law and Emerging Jurisprudence
- Uber Self-Driving Car Fatality (2018, U.S.) – The first pedestrian death caused by an autonomous test vehicle raised the question of who should bear liability: the safety driver, Uber, or the vehicle's manufacturer. That criminal charges were laid against the safety driver, and not against Uber or the AI, demonstrated an aversion to extending liability to AI directly.¹³
- State v. Loomis (2016, U.S.) – The use of the COMPAS risk-assessment algorithm in sentencing raised due process concerns about algorithmic transparency. The Wisconsin Supreme Court upheld the use of COMPAS but acknowledged the dangers of opacity when AI is used in the criminal justice context.¹⁴
- European Parliament Resolution (2017) – The resolution on civil law rules on robotics suggested legal personality for sophisticated autonomous robots, reflecting global concern over the legal status of AI.¹⁵
V. Policy Considerations and Proposed Solutions
- Risk Distribution Models
Liability could be distributed across participants (developers, deployers, users) in proportion to their control over, and benefit from, the AI system. A noteworthy recommendation is to mandate insurance schemes, similar to motor vehicle insurance, for high-risk AI systems.¹⁶
- AI-Specific Legislation
Where general tort or contract doctrine cannot capture the issue, an AI-specific liability framework could impose strict liability for prescribed harms and mandate transparency in automated decisions.¹⁷
- Limited Legal Personality for AI
Some scholars advocate a limited form of legal personhood for AI systems, allowing them to hold property, enter contracts, or be insured. Though strongly contested, the position has merit: it would compensate victims of an autonomous agent without letting human actors off the hook.¹⁸
Conclusion
Legal solutions for AI must wrestle with AI's challenge to core principles of liability premised on human decision-making and reckon with the consequences of autonomous, opaque systems. While tort, contract, and criminal law still afford some redress, AI strains their traditional categories and in many cases overwhelms their utility. Comparative regulatory models underscore the varied approaches: the EU favours precautionary regulation, the US permits development through case-by-case litigation, and India remains in an exploratory phase.
A broad strategy is needed: frameworks for distributing risk and liability across constellations of stakeholders, sector-based AI liability legislation, and mechanisms for policing algorithmic bias. Full legal personality for AI is a bridge too far at this stage, but limited forms might provide accountability mechanisms compatible with innovation (e.g., insurance-backed liability). As AI becomes more pervasive, legal systems must adapt to ensure that technological progress does not erode our commitments to justice and responsibility.
References
¹ Roger Brownsword, Law, Technology and Society: Re-Imagining the Regulatory Environment 88–92 (2019).
² Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information 7–10 (2015).
³ European Parliament, Resolution with Recommendations to the Commission on Civil Law Rules on Robotics, P8_TA(2017)0051 (Feb. 16, 2017).
⁴ James Goudkamp, Tort Law Defences 35–38 (2013).
⁵ Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts 55–60 (2013).
⁶ Guido Noto La Diega, Artificial Intelligence and Tort Law: A Comparative Perspective, 9 J. Eur. Tort L. 1, 12–16 (2018).
⁷ Jason J. Killmeyer, Contracting with AI Agents: Legal Questions, 23 Rich. J.L. & Tech. 4, 18–23 (2017).
⁸ Gabriel Hallevy, When Robots Kill: Artificial Intelligence Under Criminal Law 44–47 (2013).
⁹ European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final.
¹⁰ Id. at 12.
¹¹ Bryant Walker Smith, Automated Driving and Product Liability, 2017 Mich. St. L. Rev. 1, 26–32.
¹² NITI Aayog, National Strategy for Artificial Intelligence (June 2018), https://www.niti.gov.in/national-strategy-artificial-intelligence.
¹³ Cade Metz, Self-Driving Uber Car Kills Pedestrian in Arizona, N.Y. Times (Mar. 19, 2018).
¹⁴ State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
¹⁵ European Parliament, supra note 3.
¹⁶ Thomas Burri, Liability for Artificial Intelligence and EU Product Safety Regulation, 10 J. Eur. Tort L. 15, 20–23 (2019).
¹⁷ Id. at 25.
¹⁸ Shawn Bayern, The Implications of Modern Business Entity Law for the Regulation of Autonomous Systems, 19 Stan. Tech. L. Rev. 93, 111–15 (2015).