Artificial Intelligence and Liability: Who Is Responsible When AI Goes Wrong?

Published on: 22nd December, 2025

Authored by: Gungun Agrawal
JECRC University

Executive Summary

Artificial Intelligence (AI) has moved from being a futuristic buzzword to becoming an everyday reality in healthcare, banking, law, and even governance. With this rise comes an equally pressing concern: what happens when AI makes a mistake? If a medical AI misdiagnoses a patient, an AI-driven credit scoring system denies a loan unfairly, or a legal AI tool hallucinates non-existent case law, who bears the blame? The doctor, the bank, the lawyer, the developer, or the AI itself?

This is the heart of the liability debate. Courts and lawmakers across the globe, including India, are trying to apply traditional doctrines like negligence, product liability, and vicarious liability to modern AI harms. At the same time, new laws such as the EU AI Act (2024) and India’s Digital Personal Data Protection Act (2023) are shaping the contours of accountability.

I. Understanding Key Legal Concepts

To navigate the complex terrain of AI liability, several foundational legal principles must be understood. Negligence refers to the failure to exercise reasonable care. For example, a doctor blindly following AI recommendations without applying professional judgment may be negligent. Product liability allows victims to hold manufacturers or vendors responsible if an AI tool or device is defective and causes harm, even without proving negligence.

Vicarious liability means that institutions like hospitals, banks, or law firms may be held accountable for the mistakes of their employees using AI in the course of work. Under India’s Digital Personal Data Protection Act, 2023, any organization handling personal data is designated a data fiduciary with duties of fairness, consent, and security—if an AI system misuses this data, liability attaches. Finally, the duty of care requires professionals to meet a minimum legal standard of responsibility, even when assisted by AI.

II. Global Regulatory Landscape

The European Union has taken a leadership role in AI regulation. The EU AI Act, which entered into force in August 2024 and applies in phases, introduces risk-based duties for AI systems, requiring rigorous testing, transparency, and monitoring for “high-risk” applications such as medical devices and financial scoring systems.[1] The complementary proposal for an AI Liability Directive would make it easier for victims to claim compensation by shifting the burden of proof in certain cases.

The United States, by contrast, relies primarily on its existing tort system, built on negligence and product liability doctrines and supplemented by sectoral regulation. The 2023 Executive Order on AI (Executive Order 14110) mandated transparency and safety standards but did not create direct AI-specific liability rules.[2] This patchwork approach leaves much to judicial interpretation and agency guidance.

In India, there is not yet a dedicated law that deals exclusively with AI and liability. Instead, existing frameworks like the Consumer Protection Act, 2019, the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, will be applied to AI-related harms.[3] Additionally, policy documents from NITI Aayog emphasize “Responsible AI” with a focus on safety, fairness, and accountability.

III. Sectoral Applications and Liability Distribution

Healthcare: In the medical sector, responsibility for AI errors typically falls on three parties. First, the treating doctor may be liable for negligence if they fail to exercise independent clinical judgment and blindly follow AI recommendations. The landmark case of Jacob Mathew v. State of Punjab established that medical professionals are liable only if their conduct falls below accepted standards.[4] Applied to AI, this means doctors must use professional judgment and cannot rely solely on algorithms.

Second, hospitals may face vicarious liability for errors that occur under their operational oversight. Third, the manufacturer or vendor of the AI diagnostic tool may be held responsible under product liability principles if the system itself is defective or inadequately tested.

Finance: In the banking and financial services sector, AI systems are increasingly used for credit scoring, loan approvals, and risk assessment. Banks and financial institutions must ensure fairness and transparency in AI-based decisions or face regulatory penalties. If an AI credit scoring system discriminates unfairly or operates on biased data, the institution deploying it cannot escape accountability by claiming the AI acted autonomously. Regulatory frameworks demand that financial institutions maintain oversight and explainability in automated decision-making processes.

Legal Sector: For lawyers and law firms, professional responsibility cannot be delegated to faulty AI tools. If a legal research AI “hallucinates” non-existent case precedents and a lawyer relies on this misinformation in court filings, the lawyer, not the AI vendor, bears primary professional liability. Legal professionals must verify AI-generated content and exercise independent legal judgment. However, if the AI tool itself is fundamentally defective, the vendor may face concurrent liability.

IV. Judicial Precedents Shaping AI Accountability

Several judicial decisions provide important guidance on how courts may approach AI liability. In K.S. Puttaswamy v. Union of India, the Supreme Court of India recognized privacy as a fundamental right.[5] AI systems that misuse personal data or operate without informed consent will face strict constitutional scrutiny under this framework.

The case of Shreya Singhal v. Union of India, while primarily addressing online speech, clarified that intermediaries are not automatically liable for third-party content unless they have actual knowledge of, or participate in, the harmful conduct.[6] This reasoning may extend to AI vendors: mere tool providers may escape liability unless they knowingly contribute to harm or fail to address known defects.

Internationally, the U.S. case of State v. Loomis highlighted due process concerns when criminal sentencing relied on a proprietary algorithmic risk-assessment tool.[7] The Wisconsin Supreme Court flagged the dangers of opaque algorithms in justice delivery, emphasizing the need for transparency and explainability when automated systems affect fundamental rights.

V. The Path Forward: Shared Responsibility and Oversight

Artificial Intelligence is powerful but imperfect, and when it errs, someone must answer. Courts are unlikely to grant AI systems “legal personhood” in the foreseeable future, meaning responsibility will remain firmly with humans and organizations. The challenge lies in balancing innovation with accountability. If liability rules are too strict, AI development may slow; if too lax, victims may go uncompensated.

The path forward requires shared responsibility among all stakeholders. Developers must design safe, tested, and transparent systems. Deploying organizations, whether hospitals, banks, or law firms, must maintain documented oversight, implement contractual safeguards with vendors, and ensure continuous monitoring of AI performance. Professionals using AI must exercise independent judgment and cannot abdicate their duty of care to algorithms.

Across the world, the European Union has taken the lead by establishing comprehensive regulatory frameworks that govern every stage of an AI system’s lifecycle, from design through deployment. The United States relies primarily on existing negligence and product liability doctrines supplemented by agency guidance. India is moving incrementally, using privacy and consumer protection laws as the foundation for emerging AI regulation.

VI. Conclusion

This article has examined the complex issue of liability when AI systems fail, focusing on three sectors where AI deployment is expanding rapidly: healthcare, finance, and law. Traditional legal doctrines of negligence, product liability, and vicarious liability continue to provide the foundation for accountability, but these frameworks face increasing pressure as AI decisions become more autonomous and opaque.

The analysis demonstrates that liability typically rests with human actors, including the doctors, bankers, and lawyers who deploy and rely on AI systems, as well as with manufacturers or vendors when the underlying system is defective. No jurisdiction has granted AI systems independent legal personality, and courts consistently look to human decision-makers when assigning responsibility.

Globally, regulatory approaches vary significantly. The European Union has established the most comprehensive framework through the AI Act and proposed liability directives. The United States maintains a decentralized approach rooted in traditional tort law. India continues adapting existing consumer protection, information technology, and data protection statutes to address AI-specific risks.

The fundamental principle remains clear: AI cannot replace human judgment. It is a tool, and like any tool, it must be used with care and responsibility. The law will continue to evolve as AI capabilities expand, but its core message endures: accountability in AI is never absent; it is only redistributed across the chain of actors who design, deploy, and rely upon these systems.

References

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 2024 O.J. (L 1689).
[2] Executive Order 14110 of October 30, 2023, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75,191 (Nov. 1, 2023).
[3] The Consumer Protection Act, No. 35 of 2019, INDIA CODE (2019); The Information Technology Act, No. 21 of 2000, INDIA CODE (2000); The Digital Personal Data Protection Act, No. 22 of 2023, INDIA CODE (2023).
[4] Jacob Mathew v. State of Punjab, (2005) 6 SCC 1.
[5] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
[6] Shreya Singhal v. Union of India, (2015) 5 SCC 1.
[7] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
