WHO’S TO BLAME? THE LEGAL PUZZLE OF AI-DRIVEN DECISIONS

Published on 26th July 2025

Authored By: Akshat Singh
KR Mangalam University

Abstract

The legal issues raised by the growing application of Artificial Intelligence (AI) in autonomous decision-making are examined in this article. As AI is incorporated into fields such as healthcare, criminal justice, and transportation, concerns have emerged regarding who is liable for harms caused by these systems. The central legal issue addressed is the difficulty of assigning accountability in the absence of human intent or legal personhood, two fundamental pillars of conventional liability doctrines. The article begins by outlining how AI works, particularly in autonomous contexts, and why its opaque, unpredictable nature complicates legal scrutiny. It then examines the shortcomings of current legal frameworks, including tort, criminal, and product liability law, in addressing AI-related harms. Using real-world examples and comparative analysis, the article explores how jurisdictions such as the United States, the European Union, and India are addressing these issues. Key recommendations include the adoption of AI-specific liability laws, the development of algorithmic audit mechanisms, consideration of risk-based regulation, and exploration of limited legal personhood for high-risk AI systems. The article concludes by calling for an adaptive legal framework that balances innovation with accountability in a rapidly evolving technological landscape.

Introduction: Rise of Autonomous Intelligence

Artificial Intelligence has transformed industries and economies globally. AI has become integral to decision-making processes, from recommending products in e-commerce to establishing creditworthiness in financial services, diagnosing medical ailments, and even shaping public policy through predictive analytics. In India, the adoption of AI technologies has been particularly rapid in sectors such as healthcare, transportation, finance, and governance. However, AI comes with significant legal challenges, particularly around accountability, transparency, bias, and human rights. Unlike traditional tools, many AI systems operate with a degree of autonomy and unpredictability, evolving through machine learning algorithms without direct human intervention. As a result, core elements of legal liability, such as intention (mens rea), control, and foreseeability, are difficult to apply. If an AI system makes a decision that results in harm, damage, or injury, who is to blame? The developer? The user? The data trainer? Or should the AI itself be held accountable in some way?

Demystifying AI: The Technology Behind the Legal Question

To grasp the legal issues posed by artificial intelligence (AI), it is essential to understand how these systems function and why they complicate traditional concepts of liability. Artificial Intelligence includes a diverse array of computer systems created to perform tasks that typically require human cognitive skills, such as reasoning, learning, perception, and decision-making. Within this domain, machine learning (ML) and deep learning are subfields that allow systems to learn from data patterns and improve their performance over time without being specifically programmed for each situation. AI systems are typically categorized into two types: narrow AI, which focuses on particular tasks (like spam filters and recommendation engines), and general AI, which has wider cognitive capabilities and remains largely theoretical at this point. The greater the autonomy a system has in its learning, the more challenging it becomes to predict or control its outcomes. This gives rise to the “black box” problem, where the internal decision-making processes of complex AI models, especially deep neural networks, are obscure even to their creators. The legal challenge emerges when these systems are employed in crucial domains: a self-driving car making a life-or-death choice, or an algorithm that decides whether to approve a loan or grant parole. In such scenarios, the absence of transparency and traceability makes it difficult to assign responsibility or identify the source of the mistake. Additionally, AI systems do not possess intent, awareness, or moral accountability, which widens the gap between them and the human-centric standards that underlie legal systems. By elucidating how AI systems work and develop, this section paves the way for understanding why traditional legal frameworks struggle to address decisions made by machines, and underscores the pressing need for a shift in legal perspective.
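
To make this opacity concrete, consider a minimal sketch, assuming a hypothetical loan-approval classifier trained on synthetic data (written in Python with scikit-learn purely for illustration; every feature, figure, and variable name here is invented). The model returns a decision and a probability, but nothing resembling reasons that a court, a regulator, or the affected applicant could examine.

    # Minimal sketch of the "black box" problem: a hypothetical loan-approval
    # model trained on synthetic data (illustrative only).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(seed=0)
    # Invented applicant features: income, credit history length, existing debt.
    X = rng.normal(size=(1000, 3))
    # Invented ground truth generated by an arbitrary hidden rule plus noise.
    y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(scale=0.3, size=1000) > 0).astype(int)

    # A small neural network: thousands of learned weights jointly decide each outcome.
    model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
    model.fit(X, y)

    applicant = np.array([[0.2, -1.0, 1.5]])
    print("approved" if model.predict(applicant)[0] else "rejected")
    print(model.predict_proba(applicant))   # a score, not a reason
    # The only "explanation" available by default is the raw weight matrices,
    # which are meaningless to a court or to the applicant who was refused.
    print([w.shape for w in model.coefs_])

Even the developer who wrote these few lines cannot point to a single rule that produced the outcome; the decision is distributed across the learned weights, which shift whenever the model is retrained on new data.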

Legal Liability and the Traditional Doctrinal Framework

Traditional legal structures, which are built around human agency, may not sufficiently capture the complexities of actions taken by AI. This highlights the necessity for well-defined regulations regarding liability, particularly in instances where AI systems function independently. The issue of responsibility becomes even more critical when taking into account AI’s capacity to learn and evolve over time.
If an AI system makes decisions based on its own “learning,” rather than direct human programming, determining liability becomes challenging. Should accountability rest with the developer, the user, or the AI system itself?
Tort Law
In tort law, liability typically arises when a person neglects a duty of care, leading to foreseeable harm. When examining AI, this raises significant questions: can the creator or operator of an AI system be held responsible for its autonomous actions? Courts have sometimes applied product liability principles to software, viewing AI as a “product.” However, the constantly evolving nature of AI, especially its self-learning features, challenges the concepts of static manufacturing defects and anticipated usage behaviors. This complicates the assessment of negligence in design or in the provision of adequate warnings.
Criminal Law
Criminal liability is dependent on establishing mens rea (guilty mind) and actus reus (guilty act). Since AI systems lack intent or moral agency, assigning criminal responsibility becomes complex. Can a programmer be charged criminally if an AI behaves in an unforeseen way? Courts may adapt concepts like constructive liability or recklessness, but these might not suffice in scenarios where autonomy and unpredictability are key factors.
Contract Law
AI is increasingly employed in the creation, negotiation, and execution of contracts (such as smart contracts or algorithmic trading). Legal issues arise when these systems misrepresent information or execute unintended terms. Existing principles regarding mistake, misrepresentation, or agency law may not adequately resolve disputes when AI functions beyond its designed limits.
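
By way of illustration, the following is a simplified, hypothetical sketch in Python (standing in for a smart-contract or algorithmic execution rule; the ShipmentContract class, its parties, and its oracle trigger are invented for this example) of how an automated term performs itself once a data condition is met, with no human reviewing the trigger.

    # Hypothetical, simplified "smart contract" style clause: payment is released
    # automatically once an external data feed reports that goods were delivered.
    from dataclasses import dataclass

    @dataclass
    class ShipmentContract:
        buyer: str
        seller: str
        price: float
        settled: bool = False

        def on_oracle_update(self, delivery_confirmed: bool) -> str:
            # The clause executes itself; no human reviews the trigger.
            if delivery_confirmed and not self.settled:
                self.settled = True
                return f"Transfer {self.price} from {self.buyer} to {self.seller}"
            return "No action"

    contract = ShipmentContract(buyer="Acme Ltd", seller="Widgets Inc", price=10_000.0)
    print(contract.on_oracle_update(delivery_confirmed=True))

If the delivery feed is erroneous or manipulated, the payment still moves: the unintended term is performed even though no party formed any intention at the moment of execution, which is precisely where doctrines of mistake, misrepresentation, and agency begin to strain.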

Vicarious Liability and Duty of Care

One potential resolution is the vicarious liability principle, which holds employers or operators responsible for the actions of those (or that) under their supervision. However, AI systems—especially those operating in dynamic environments—often behave in ways that humans do not directly influence. This undermines the traditional employer-employee analogy. In the end, these legal frameworks—originally constructed around human-centered principles—struggle to fill the accountability gap created by autonomous AI. As AI continues to gain traction, the legal system must reevaluate how responsibility is allocated in a landscape where not all agents are human.

Accountability Gaps in the Current Legal Framework

Despite the sophistication of existing legal doctrines, a substantial accountability gap emerges when harm is caused by autonomous AI systems. Traditional legal tools assume that a liable party will either be a natural person or a legal entity with a clear causal connection to the wrongful act. However, with AI, causation is often diffuse, and intent is non-existent, leading to a “responsibility vacuum”.
The Black Box Dilemma
AI technologies, particularly those utilizing deep learning methods, often function as “black boxes,” making it difficult even for developers to explain how a particular result is generated. This opacity hinders the determination of causation, responsibility, and foreseeability. Legal frameworks that depend on traceable logic and anticipated behaviors struggle to address this lack of clarity.
Absence of Legal Personhood, Absence of Liability
Unlike corporations, AI technologies are not recognized as legal entities. They cannot be sued, penalized, or held accountable in court. Nevertheless, their decisions can have direct consequences, such as denying loan requests, influencing parole decisions, or misdiagnosing health conditions. Lacking legal recognition and intent, AI cannot bear accountability itself, so responsibility must be shifted onto some human actor, often without clarity or fairness about which one.
Blurry Lines of Responsibility
Autonomous systems usually involve the joint efforts of various stakeholders: developers, data trainers, platform providers, and end-users. In these scenarios, identifying who should bear responsibility becomes complex. Courts often struggle to determine liability when an algorithm developed by one organization behaves unexpectedly under the control of another.
Regulatory Lag
The rapid pace of technological development greatly outstrips the creation of legal standards. At present, in many regions, there is no specific legislation for AI that directly addresses liability, accountability, or regulatory compliance. Regulatory bodies often rely on outdated laws concerning consumer protection, IT, or product safety—none of which adequately take into account the autonomy or learning capacities of AI.
These deficiencies highlight the urgent need for innovative legal frameworks that can accommodate the unique characteristics of AI systems. Without intervention, the legal structure risks allowing harm without providing solutions—diminishing both accountability and public trust in AI technologies.

Comparative Legal Approaches

Legal frameworks around the world are increasingly facing the regulatory and liability challenges posed by artificial intelligence, although the methods used differ in terms of scope, maturity, and foundational beliefs. A comparative analysis of the European Union (EU), United States (US), and India reveals both common issues and distinct legal approaches to AI accountability.

European Union: Preventive and Cautious
The EU has taken a preventive approach, emphasizing safety and fundamental rights in its AI regulations. The EU AI Act, which was enacted in 2024, classifies AI systems according to their risk levels (for instance, minimal, high, unacceptable) and enforces more rigorous compliance requirements for applications categorized as high risk. The Act mandates transparency, human oversight, and effective risk management practices. In terms of liability, the proposed AI Liability Directive (2022) seeks to unify the rules among member states, transferring the burden of proof to developers in specific high-risk situations. It also revises product liability laws to better reflect the evolving nature of software and algorithms.
United States: Fragmented and Market-Driven
In the US, there is no overarching federal legislation on AI, resulting in a fragmented, litigation-focused regulatory environment. Various agencies (including the FDA, FTC, and NHTSA) govern AI within their respective domains. Courts typically depend on existing tort and contract laws, often treating AI as a product or tool rather than an independent entity. The legal culture in the US favors innovation and generally avoids hasty regulation, allowing AI to progress swiftly while leaving accountability concerns largely unaddressed. Recently, debates concerning algorithmic bias and automated decision-making in contexts like hiring, law enforcement, and lending have led certain states (such as California and Illinois) to propose or implement specific AI laws.
India: Cautious Yet Early
India is in the early stages of developing AI governance. The National Strategy for Artificial Intelligence (NITI Aayog, 2018) supports AI initiatives that benefit society, yet it currently lacks enforceable regulatory or liability structures. Legal responsibility for damages caused by AI is primarily addressed, if recognized at all, under general statutes like the Information Technology Act, 2000, consumer protection laws, and tort principles.
Judicial precedents are sparse, and the regulatory dialogue is still evolving. However, India acknowledges the importance of AI ethics and is working towards a regulatory framework that balances the promotion of innovation with risk mitigation. Legal recognition of algorithmic harm, particularly in sectors such as finance and healthcare, is anticipated to increase in the coming years.

Proposed Solutions and Emerging Legal Models

The lack of clear legal accountability for AI necessitates creative solutions that adapt existing legal principles while incorporating new regulatory frameworks. Academics, lawmakers, and technology experts have suggested various methods to ensure accountability without hindering progress. This section examines the most promising legal and regulatory approaches currently being discussed.
1. Regulation Based on Risk
Drawing inspiration from the EU’s AI Act, risk-based regulation categorizes AI technologies according to their potential to inflict harm. High-risk systems, such as those utilized in healthcare, law enforcement, or self-driving vehicles, would encounter more stringent legal requirements, including mandatory evaluations, impact analyses, and human oversight. This strategy enables customized regulation without overwhelming low-risk innovations.
2. Transparency and Auditability in Algorithms
Requiring clarity in AI outcomes could improve legal examination. By compelling developers to create systems with traceable decision-making (such as “white-box” models or explainable AI/XAI), legal authorities and regulators can more effectively determine whether harm was predictable or preventable. Third-party audits of algorithms could become a regular compliance requirement for essential applications; a minimal example of one such audit check is sketched at the end of this section.
3. Imposing Strict or Vicarious Liability
Certain experts advocate for the application of strict liability to individuals or entities that deploy or profit from AI technologies, regardless of culpability. This approach mirrors liability standards associated with inherently perilous activities (like the use of explosives). Alternatively, vicarious liability could extend to developers, employers, or platform owners, treating AI as an agent acting under human direction.
4. Electronic Personhood (Debatable)
One radical yet contentious suggestion is to endow advanced AI systems with limited legal personhood. This concept, akin to corporate personhood, could permit AI systems to be held accountable, insured, or penalized independently. Nevertheless, it presents philosophical and ethical dilemmas regarding the comparison of machine agency to human or corporate agency, and does not have widespread acceptance.
5. Regulatory Sandboxes and Flexible Governance
Governments can utilize regulatory sandboxes to allow developers to trial high-risk AI in controlled settings with real-time legal oversight. This approach fosters innovation while enabling regulators to adjust standards based on practical experiences. Adaptive governance models focus on ongoing rule-making, consultation with stakeholders, and adaptability to technological advancements.
Together, these suggestions can help foster innovation and ensure accountability, creating balance in a world where decisions are no longer solely made by humans.
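
To make point 2 above concrete, the following is a minimal sketch of one narrow check a third-party algorithmic audit might run, assuming a hypothetical lending model and synthetic data (Python; the figures and the approval_rate helper are invented for this example): comparing approval rates across a protected attribute, in the spirit of the familiar “four-fifths” disparate-impact rule of thumb.

    # One illustrative audit check: compare approval rates across a protected
    # attribute for a hypothetical lending model (synthetic data, Python).
    import numpy as np

    def approval_rate(decisions: np.ndarray, group_mask: np.ndarray) -> float:
        # Share of applicants in the group whose loans were approved (decision == 1).
        return float(decisions[group_mask].mean())

    rng = np.random.default_rng(seed=1)
    decisions = rng.integers(0, 2, size=500)               # model outputs: 1 = approve, 0 = reject
    group_a = rng.integers(0, 2, size=500).astype(bool)    # hypothetical protected attribute

    rate_a = approval_rate(decisions, group_a)
    rate_b = approval_rate(decisions, ~group_a)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

    print(f"approval rate (group A): {rate_a:.2f}, (group B): {rate_b:.2f}, ratio: {ratio:.2f}")
    if ratio < 0.8:  # illustrative "four-fifths" threshold, not a binding legal standard
        print("Potential disparate impact flagged for human review")

A real audit would pair simple statistical tests like this with documentation review, explainability reports, and scrutiny of the training data, but even this sketch indicates the kind of evidence regulators could require from high-risk systems before and after deployment.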

Conclusion: Rethinking Legal Accountability in the Age of AI

Artificial Intelligence has transitioned from a purely theoretical idea to a practical instrument, with systems now capable of making complex, independent decisions that can profoundly affect individuals’ lives. Yet these technologies are advancing faster than the legal structures designed to govern human behavior, creating a significant accountability gap: harm occurs, but there is no clear legal entity that can be held responsible under existing laws. This article has explored the limitations of traditional liability frameworks, including tort, criminal, and contract law, when applied to autonomous AI systems. It has also highlighted the inherent lack of transparency and unpredictability in machine learning, which undermine key legal concepts such as foreseeability, control, and intent. Comparative analysis of the EU, the US, and India shows that, despite differing regulatory approaches, all three jurisdictions face similar challenges in updating their legal systems to address autonomous agents. To meet these challenges, the article proposes a comprehensive strategy: implementing risk-based regulation, mandating algorithmic transparency, adjusting liability standards, and encouraging international legal harmonization. While more radical concepts like electronic personhood remain under discussion, pragmatic solutions, such as regulatory sandboxes and independent audits, offer promising paths forward. Ultimately, ensuring accountability in AI decision-making is not merely a legal obligation but a societal need. If legal frameworks do not evolve alongside the technology, victims may be left without redress and developers without clear responsibilities. A reimagined, adaptable legal framework is essential, one that acknowledges the autonomy of AI systems while guarding against the erosion of accountability.



