AI and Legal Liability: Challenges of Accountability in Autonomous Decision-Making

Published on 22nd July 2025

Authored By: Swaraj Pandey
Amity University, Lucknow AUUP

Introduction

Artificial Intelligence (AI) is being rapidly integrated into diverse domains – from healthcare diagnostics and financial services to autonomous vehicles and law enforcement – generating major benefits but raising novel legal questions. The autonomous nature of advanced AI systems means they can make complex decisions with limited human intervention.[1] However, when an AI-driven system causes harm, traditional legal concepts of fault and causation may no longer fit neatly. Autonomous vehicles, for example, have recorded thousands of incidents in recent years: in the U.S., 3,979 self-driving vehicle incidents were reported between August 2019 and June 2024, with roughly 10% resulting in injuries and about 2% in fatalities. The first known pedestrian fatality caused by a self-driving car occurred in 2018, when an Uber test vehicle struck Elaine Herzberg. The Uber system “detected the pedestrian 5.6 seconds before impact” but misclassified her and failed to avoid the crash, illustrating how system opacity and multi-party involvement (sensor manufacturer, software designer, human supervisor) made legal fault hard to pinpoint. Such high-stakes incidents underscore a central issue: who bears legal responsibility when AI systems act autonomously and unpredictably?[2]

At present, most jurisdictions have no AI-specific liability statute. Existing laws address AI-related harms only indirectly, through fault-based torts, contract law, or product liability. Under these traditional regimes, claimants must prove human error or defect – an increasingly fraught task when an AI “black box” defies explanation.

Facts

Autonomous Vehicle Accidents: Autonomous and semi-autonomous vehicles have been involved in thousands of crashes. U.S. data (covering August 2019–June 2024) show 3,979 reported incidents involving vehicles operating with automated driving systems. Notably, 2022 saw 1,450 such incidents – the highest annual total recorded – and early 2024 had 473 incidents through June. Of these cases, about 10% caused personal injury and 2% were fatal. These statistics illustrate that while AI may improve over time, the technology is not error-free.[3] One case in particular highlights the challenges of assigning blame: in 2018, an Uber self-driving car struck Elaine Herzberg in Tempe, Arizona, killing her. Investigators found that the vehicle’s AI had detected Herzberg but failed to classify her correctly as a pedestrian in time. Although the backup safety driver was distracted, Uber’s own safety oversight was also criticized. Yet legally, blame could not easily be assigned to any single party – manufacturer, programmer, or user. (The incident ultimately led Uber to pause its public road testing.)

Healthcare and Other Critical Systems: AI applications in medicine also carry liability implications. If an AI diagnostic tool misreads an image or suggests a wrong treatment, patient harm could result. In such an event, existing consumer law would treat the AI device as a product, making the manufacturer liable for defects – not the AI itself. For example, under India’s Consumer Protection Act (2019), an AI-based medical device that misdiagnoses could yield a claim against its maker as a “defective product”. Similar scenarios arise with AI in finance or infrastructure: an AI-driven trading algorithm could incur massive losses, or an AI supervising power grids could fail. In each case, responsibility must ultimately lie with human or corporate actors, since current law does not recognize AI as a legal person.[4]

These facts illustrate the core challenge: AI systems can make and execute consequential decisions on their own, causing injury or loss. Yet by design these systems are opaque and adaptive, blurring the link between human choices and outcomes. As one analysis observed, AI’s “complexity, modification through updates or self-learning, and limited predictability… make it more difficult to determine what went wrong and who should bear liability if it does.” In short, autonomous AI creates a “responsibility gap” between harm and blame.[5]

Issue

The central legal issue is liability and accountability for autonomous AI decision-making. Specifically: when an autonomous AI system causes harm, which party (or parties) should be held legally responsible, and under what framework? Traditional liability law asks whether a human actor was negligent or a product was defective, but autonomous AI complicates every element (duty, breach, causation, and foreseeability). Can the AI itself be blamed? (Current law uniformly answers “no,” since AI lacks legal personhood.) If not, liability might fall to creators, deployers, users, or even owners/insurers. But how can it be apportioned fairly when an AI has “learned” and acted without being directly programmed for each decision? Moreover, different jurisdictions have begun drafting divergent rules (e.g. the EU’s AI Act and proposed AI Liability Directive, U.S. state laws, India’s proposals), raising the question of how international norms will align.

Law

Negligence and Fault-Based Liability: In most common-law jurisdictions (e.g. UK, India, U.S.), tort liability for personal injury or property damage is founded on fault. A plaintiff must show a defendant owed a duty of care, breached it, and caused foreseeable harm. Courts usually treat AI systems as objects or services; if a human operator or manufacturer was negligent (e.g. in design, testing or oversight), that party can be liable under ordinary negligence principles. However, as noted above, proving those elements is harder with autonomous AI. If an AI “decides” something unforeseen, even its designer might not have anticipated it. UK commentary observes that in fully autonomous operation “it may become more problematic to establish foreseeability” when an injury occurs. Similarly, under Indian tort law (rooted in Donoghue v Stevenson), a breach occurs if a reasonable person would have foreseen harm. Yet with AI, it may be impossible to trace the exact error to any actor’s wrongdoing.

Strict/Product Liability: To address such gaps, many legal systems impose strict liability for defective products. For example, the EU Product Liability Directive (as incorporated into national law) makes manufacturers strictly liable for damage caused by a defective product. Thus, if an AI device (like a robot or diagnostic machine) is found “defective,” the producer must compensate injured victims even without proving negligence. The Directive explicitly covers any “natural or legal person who is the producer of a finished product” (typically meaning human firms, not the AI agent). India’s Consumer Protection Act (2019) similarly defines goods to include “computer programmes,” suggesting that an AI tool could qualify as a product. In such a regime, a user harmed by an AI misbehaving (e.g. a robot arm dropping a pallet) could claim against the maker as producer of a “defective” product.

Contract and Sectoral Laws: If the AI service or tool is supplied under contract, a breach of warranty or contract term (e.g. a performance promise) could ground liability. However, many AI deployments are free or cloud-based services, making contract claims difficult. Sectoral regulations may apply where specific statutes govern AI use. One example is autonomous vehicles: the UK’s Automated and Electric Vehicles Act 2018 makes insurers responsible for accidents when an insured automated vehicle is driving itself, essentially shifting liability from drivers to insurers. In India, the Motor Vehicles (Amendment) Act 2019 introduced provisions for autonomous vehicles: the owner or operator is generally deemed liable for accidents involving self-driving vehicles. Thus, if an autonomous truck crashes, the owner company faces legal consequences unless it can prove the accident was beyond its control (a shifting of the burden). Other special regimes include cyber-laws: under India’s Information Technology Act, 2000, offenses (like hacking or spreading malware) are defined with human perpetrators in mind, and Section 66 covers computer-related wrongdoing by individuals.

Emerging International and Domestic Laws: Recognizing AI’s challenges, some jurisdictions are adopting AI-specific laws. In the EU, the Artificial Intelligence Act (adopted 2024) classifies AI systems by risk and imposes safety and transparency obligations on providers. In parallel, the EU proposed an AI Liability Directive (2022) to adapt civil liability rules to AI. This Directive would create a rebuttable presumption of causality: when harm is caused by an AI system, it could be presumed to result from the provider’s (manufacturer’s/developer’s) failure to comply with its obligations.[6]

In India, the legal framework for AI liability remains largely undeveloped. India’s Consumer Protection Act, 2019 has been interpreted to apply to “goods” and “services” including software, and under its definitions an AI product can be treated like any defective product. The Motor Vehicles Act (2019) provisions effectively hold vehicle owners liable for crashes by autonomous vehicles. But there is no Indian law explicitly addressing AI system defects or autonomous decision-making. The government has issued non-binding guidelines (NITI Aayog’s “Responsible AI” principles, 2021) advocating safety and accountability, but these are not legally enforceable. A draft Digital India Act (2023) reportedly contemplates regulating high-risk AI, but it has not been passed. Indian law does not recognize AI as a legal person, so liability must always “trace back to a human or corporate actor.” This means that even cutting-edge AI harms are processed under existing statutes (e.g. contract law, IT Act, IPC, MV Act) or common-law doctrines, none of which were designed for autonomous AI.

Analysis

The foregoing facts and laws highlight significant accountability gaps. Our analysis examines these issues under three broad themes:

  • Proof and Burden of Causation: Traditional fault-based liability is poorly suited to black-box AI. Courts typically require a plaintiff to show exactly how the defendant’s act caused harm, but autonomous AI often makes that linkage opaque. The EU Commission has noted that “the specific characteristics” of AI “may make it more difficult to offer victims compensation” under current rules. Indeed, if a victim cannot identify whether the AI’s designer, trainer, or end-user was at fault (or if the AI “chose” without human influence), a negligence or contract suit may fail on causation or foreseeability grounds. Proposed solutions include strict liability rules or presumptions. The EU’s pending Liability Directive would impose a “rebuttable presumption” of causality for claimants when an AI system causes damage, shifting the evidentiary burden to the provider. India has not adopted such a rule; Indian claimants must still prove defect or negligence. In practice, this may leave many victims uncompensated. As the Lexology analysis observes, in a fully autonomous context “nobody at all may be liable” if a defect cannot be traced to a human error.
  • Technical Opacity and Explainability: AI’s “black box” nature adds hurdles. Deep learning systems are inherently opaque to human understanding. The NITI Aayog observes that this opacity makes auditing and debugging difficult. If a neural network misclassifies an image, even its developers may struggle to explain why. Without transparency, regulators and courts cannot easily test compliance or assign liability. This also undermines trust: where law (for example, the EU’s GDPR) grants individuals a right to obtain an explanation of automated decisions, current AI models often cannot comply. In one reported deployment of IBM’s Watson for Oncology, doctors lost confidence in the system when its recommendations could not be explained and ultimately disregarded its advice. Legally, such limitations mean that enforcing existing consumer-protection or malpractice laws requires a degree of algorithmic accountability that is currently absent. (A brief illustrative sketch after this list makes the opacity problem concrete.)
  • Ethical and Regulatory Considerations: The accountability challenge is also normative. Beyond legal liability, there is an ethical imperative that AI systems respect fundamental rights. Automated discrimination (in hiring, lending, policing, etc.) may violate equality laws, but victims might find it hard to bring claims. International guidelines (e.g. UNESCO’s AI Ethics Recommendation) stress that AI should not displace ultimate human responsibility and that systems must be transparent and fair. These norms influence legal thinking but are not directly enforceable without statutes. On the regulatory front, jurisdictions are diverging. The EU AI Act and Liability Directive aim to create harmonized rules and a high level of safety, potentially inspiring other countries. By contrast, the U.S. currently has no uniform AI law, relying on a patchwork of liability doctrines and some state laws (e.g. Colorado’s AI Act). India is actively studying global models: NITI Aayog has cited Singapore’s governance framework and adopted seven AI principles (e.g. safety, transparency, accountability) in its own Responsible AI guideline. Yet India’s new Digital Personal Data Protection Act 2023 notably omits a GDPR-style prohibition on fully automated decisions without human input, signalling a looser approach to explainability. Globally, then, there is a tension between prescriptive regulation (the EU’s strict risk-based approach) and a more laissez-faire path (as in the U.S. or still-developing Indian policy). This fragmentation creates uncertainty: a manufacturer of AI medical software, for example, might face different liability rules if its product injures a patient in Germany (EU law) versus the U.S. or India.[7][8]
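
To make the opacity problem concrete, the following is a minimal illustrative sketch, not drawn from any system or case discussed in this article. It assumes a small scikit-learn neural network trained on synthetic data (all names and data here are hypothetical): the trained model returns a prediction and class probabilities, but its only internal “account” of the decision is a set of learned weight matrices, and post-hoc tools such as permutation importance provide only an approximate statistical attribution rather than a reasoned explanation.

    # Illustrative sketch only: a hypothetical "black box" classifier on synthetic data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.neural_network import MLPClassifier

    # Synthetic feature vectors standing in for, e.g., diagnostic readings.
    X, y = make_classification(n_samples=500, n_features=8,
                               n_informative=4, random_state=0)

    # A small neural network: usable in practice, opaque on inspection.
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                          random_state=0)
    model.fit(X, y)

    sample = X[:1]
    print("Predicted class:", model.predict(sample)[0])
    print("Class probabilities:", model.predict_proba(sample)[0])

    # The model's only internal record of "why" is its learned weights,
    # which are not human-readable.
    print("Opaque parameters (weight shapes):", [w.shape for w in model.coefs_])

    # Post-hoc explainability tools yield rough statistical attributions,
    # not the causal narrative a court or regulator would need.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    print("Approximate feature importances:", np.round(result.importances_mean, 3))

In this sketch the printed probabilities are effectively the entire “explanation” the model offers, which is why a GDPR-style right to explanation is difficult to satisfy with such architectures, and why a court assessing fault may have little more than post-hoc approximations to work with.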

Figure: Global distribution of AI research publications (2022). The darker countries (e.g. USA, China, India) publish far more AI-related research, indicating where technological capacity and deployment are greatest. This concentration suggests that leading AI nations will largely set accountability standards. Multinational AI systems (e.g. smartphone OS, autonomous vehicles) may thus be governed by the most stringent applicable regulations, but harmonization is needed to avoid loopholes. India, for example, produces significant AI research (see India’s dark shade on the map) yet currently relies on general laws and proposed bills for AI oversight.

In summary, current legal frameworks only partially address autonomous AI. Fault-based liability struggles with unforeseeable algorithmic behaviour. Strict liability (under product law) covers hardware/software defects, but even this presumes a clear causal “defect,” which may not capture systemic AI flaws. Ethical biases in AI (amplifying societal prejudices) pose discrimination risks not easily remedied by existing laws. Moreover, international divergence – between the EU’s comprehensive risk-based regime and India’s still-nascent policies – means that the accountability puzzle is being solved in uneven ways worldwide.

Conclusion

AI’s transformative decision-making power brings both promise and peril. Without adaptation, victims of AI-driven harm could slip through the cracks of liability law. Our review shows that traditional doctrines (negligence, contract, product liability) often lack the tools to address autonomous, opaque AI behaviour. This has prompted calls (as reflected in recent EU initiatives) for liability rules tailored to AI – for example, shifting the burden of proof to providers when algorithms fail. Meanwhile, global ethical guidelines insist on preserving human agency and ensuring transparency. In India’s context, the piecemeal application of existing statutes (Consumer Protection Act, MV Act, IPC, IT Act) must be supplemented by clear AI-specific regulations if accountability gaps are to be closed. Ultimately, as the maxim ubi jus ibi remedium (where there is a right, there is a remedy) demands, law and policy must evolve so that no one is left without recourse simply because the decision-maker was an artificial agent. Reform efforts – from the EU’s new AI Act to India’s contemplated AI laws – will be crucial to assigning responsibility fairly and maintaining public trust in autonomous systems.

 

References

[1] https://www.lexology.com/library/detail.aspx?g=5443494e-ad73-43b5-8e89-a3784a07a636

[2] https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf

[3] https://www.craftlawfirm.com/autonomous-vehicle-accidents-2019-2024-crash-data/

[4] https://ijirl.com/wp-content/uploads/2025/03/FRAMEWORK-FOR-ADDRESSING-LIABILITY-AND-ACCOUNTABILITY-CHALLENGES-DUE-TO-ARTIFICIAL-INTELLIGENCE-AGENTS.pdf

[5] https://www.lexology.com/library/detail.aspx?g=5443494e-ad73-43b5-8e89-a3784a07a636

[6] https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en

[7] https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

[8] Stanford HAI 2022 AI Index Report – for global AI adoption statistics.

 
