AI and Legal Liability: Navigating the Emerging Landscape

Published on: 8th April, 2026

Authored by: Anju Chaudhary
Thakur Ramnarayan College of Law

Abstract

Artificial Intelligence (AI) is transforming industries, but it also raises difficult questions of legal liability. This article asks who is responsible when AI systems cause harm, whether through errors, accidents, or breaches. It surveys existing and emerging legal frameworks across product liability, negligence, intellectual property, bias and discrimination, and cybersecurity. Attribution is complicated by the opacity of AI systems and the number of parties involved in their development and deployment. Case studies, from autonomous vehicle crashes to generative AI copyright disputes, illustrate the practical stakes, while regulatory responses such as the EU AI Act and U.S. proposals seek to impose accountability. Significant legal and ethical hurdles remain. The article argues that flexible, outcome-based liability models are needed to protect individuals without stifling innovation, and it anticipates growing litigation and a specialized insurance market. Drawing on case law and scholarship, it offers guidance for stakeholders navigating the AI landscape and stresses that transparency, human oversight, and international cooperation are essential to managing AI's risks while realizing its benefits.

INTRODUCTION

Artificial Intelligence (AI) has grown from a technological curiosity into a pillar of modern innovation, spreading across healthcare, transportation, finance, and entertainment. AI systems can analyze vast data sets, learn from patterns, and make decisions autonomously, delivering speed and new insight. But they also bring risks: errors, bias, and side effects that can harm individuals, societies, and economies. Legal liability sits at the center of these risks. When an AI system malfunctions, causes an accident, or violates rights, who bears the liability?

Legal liability in AI contexts is multifaceted, drawing on tort law, product liability, intellectual property (IP), and new regulatory regimes. Unlike conventional tools, AI relies on machine learning algorithms, autonomous decision-making, and data-driven training, which creates distinctive challenges. Many systems behave as "black boxes": their internal reasoning is opaque, which makes fault hard to assign. When an AI system reaches a decision, how it got there often remains hidden, leaving courts to work backwards to locate responsibility.

This article examines the key areas of AI liability, the challenge of assigning blame, real-world case examples, the regulatory responses now taking shape, and the likely future effects, with the aim of showing how the legal system is adapting to AI. Liability law has always evolved with technology: the Industrial Revolution produced machine-safety rules, and the digital age produced data privacy laws. AI introduces a further shift by blending human and machine agency. As AI systems become more autonomous, it is increasingly asked whether they should be treated as tools, as products, or as something closer to legal entities. Court decisions and academic studies alike suggest the need for rules that protect innovation while holding the right actors responsible.

Key Areas of Liability

AI liability cuts across several fields, each with its own challenges and avenues for redress. The sections below outline those areas, with examples and the legal principles that apply.

Product Liability

AI products such as autonomous vehicles, robotic surgical assistants, or predictive analytics software are defective when they fail to perform as intended. Under product liability law, manufacturers may be held strictly liable, meaning liability attaches without proof of negligence, where a product causes harm through design defects, manufacturing defects, or inadequate warnings.

In the United States, the Consumer Product Safety Act (1972)[1] allows claims where a product is unsafe, and state laws such as California's Song-Beverly Consumer Warranty Act provide further remedies. These statutes map onto AI reasonably well, treating AI defects like any other product defect. For example, if an AI-powered medical device misdiagnoses a patient because its training data were flawed, the manufacturer may be liable for the resulting injuries. In the EU, the Product Liability Directive of 1985,[2] as later amended, places liability on producers for defective products, and the EU AI Act of 2024[3] adds AI-specific obligations on top of it.

The central challenge is proving causation: did the harm stem from the AI's design or from user error? Courts are beginning to address this question, often through the lens of foreseeability.

Negligence and Duty of Care

Negligence claims require a duty of care, a breach of that duty, causation, and damages. In AI deployment, the burden often falls on users or operators who fail to exercise reasonable care. In healthcare, physicians who rely on AI tools must verify their outputs, because AI is not infallible. In Estate of McCall v. United States (2019),[4] a VA hospital's AI system failed to flag a veteran's cancer risk, leading to a wrongful death claim. The AI developer escaped blame, but the case underscored the operator's duty of oversight.

In systems such as drones or self-driving cars, operators can be liable for failing to monitor continuously. A related and growing concept is "algorithmic negligence": an operator's failure to mitigate known AI bias can itself constitute a breach of duty.

Intellectual Property and Data Rights

Developers train AI systems on data sets that often include copyrighted or proprietary material, raising infringement issues. In Thaler v. Perlmutter (2023),[5] the courts upheld the U.S. Copyright Office's denial of copyright protection for AI-generated art, on the ground that it lacked human authorship. Meanwhile, lawsuits filed against OpenAI in 2023 allege that models like GPT infringe authors' works by reproducing their styles or content. Courts are still working out how copyright doctrine applies to AI.

Data privacy adds another layer. In the EU, the General Data Protection Regulation (GDPR)[6] holds those deploying AI responsible for mishandling personal data, with fines of up to €20 million or four percent of global annual turnover, whichever is higher. In the United States, the California Consumer Privacy Act (CCPA)[7] and the proposed American Data Privacy and Protection Act (ADPPA) protect data from AI misuse, and breaches can trigger class-action suits. The 2017 Equifax data breach illustrates how technical vulnerabilities can magnify the fallout from a breach, a risk that AI-integrated systems compound.

Bias and Discrimination

AI can inherit bias from its training data, producing discriminatory outcomes. Legal recourse often comes through anti-discrimination law: in the U.S., Title VII of the Civil Rights Act (1964) prohibits employment discrimination, and cases like EEOC v. iCIMS (2022)[8] targeted hiring algorithms that favored certain demographics.

In the EU, the Racial Equality Directive and the AI Act require bias audits for high-risk AI, and the UN's Guiding Principles on Business and Human Rights stress corporate responsibility for AI harms. Available remedies include mandated audits and compensatory damages, but proving intent, or even the bias itself, remains hard. A common starting point for such audits is a disparate-impact test, sketched below.
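To make that concrete, the following is a minimal sketch of the kind of disparate-impact check a bias audit might run on a hiring algorithm's outcomes, using the "four-fifths rule" that U.S. enforcement agencies have long applied as a rule of thumb. The figures, group labels, and threshold here are purely illustrative.

```python
# Minimal disparate-impact check (four-fifths rule); all figures hypothetical.
# Each group maps to (number selected, number of applicants).
selections = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: sel / total for group, (sel, total) in selections.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    # Under the four-fifths rule, a selection rate below 80% of the
    # highest group's rate is treated as evidence of adverse impact.
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```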

Cybersecurity and Hacking

Attackers can target AI systems with manipulated inputs that change their outputs, and developers may face liability for insecure designs. The NIST Cybersecurity Framework[9] (updated 2024) helps organizations manage that risk, and a breach can give rise to negligence claims or regulatory penalties under the U.S. Computer Fraud and Abuse Act (CFAA). The 2023 hack of the MOVEit transfer software, which affected millions, showed how AI-integrated systems can push cyber risk higher. Insurers are developing policies in response, but attribution remains murky in supply-chain attacks. A sketch of the adversarial-input threat follows.
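To illustrate the adversarial-input threat, here is a minimal sketch of one well-known attack, the Fast Gradient Sign Method, assuming PyTorch is available; the toy model and parameters are illustrative and not drawn from any incident discussed above.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.25):
    """Perturb input x in the direction that increases the model's loss,
    making a misclassification more likely."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Toy demonstration: a small linear classifier on random data.
torch.manual_seed(0)
model = nn.Linear(4, 2)
x = torch.randn(1, 4)
y = torch.tensor([0])

x_adv = fgsm_attack(model, x, y)
print("original prediction:   ", model(x).argmax().item())
print("adversarial prediction:", model(x_adv).argmax().item())
```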

Challenges in Attribution

Assigning liability in AI is difficult because of the technology's built-in quirks. Many AI systems are black boxes that give no reasons for their choices, which makes it hard to trace an error back to a cause. Is the fault in the algorithm's design, the training data, the deployment environment, or human misuse?

Developers (tech companies), deployers (businesses), users (e.g., consumers), and data providers all share responsibility, and when responsibility is spread that widely, buck-passing follows: no one takes the blame. After an AI accident, manufacturers can argue that users were negligent, while users argue that the design was flawed.

Jurisdictional issues make the problem harder. AI operates across borders through cloud computing, which calls for harmonized laws across countries. Novel proposals such as AI legal personhood, advanced by scholars such as David B. Wilkins, would grant rights and duties to AI itself, but the concept remains theoretical and has not been adopted.

Ethical dilemmas also arise. Whether AI should be held accountable like a human or treated as a tool remains unsettled. Frameworks such as the IEEE's Ethically Aligned Design[10] emphasize transparency, though enforcement lags. Explainable AI (XAI) tools are becoming more common and may reduce the opacity of AI decisions, easing scrutiny, as the sketch below suggests.
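As a simple illustration of what "explainability" can mean in practice, the following sketch trains a small model and reports which input features drove its decisions, using scikit-learn's built-in feature importances. The data set and feature names are hypothetical, and real XAI audits use far richer techniques.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative data: 200 "applicants" described by four features.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
feature_names = ["income", "tenure", "age", "zip_density"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)

# Feature importances give a coarse, global explanation of the model:
# which inputs most influenced its decisions overall.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```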

Case Studies

Real-world examples show AI liability in action and give a picture of how courts and regulators respond.

Autonomous Vehicles

Tesla's Autopilot system sits at the heart of ongoing fights over responsibility. In Olmstead v. Tesla (2023),[11] a Florida jury awarded $25 million to a family after a crash, finding Tesla negligent for giving inadequate warnings about Autopilot's limits. The case highlighted product responsibility, with experts testifying about software failures. Under the EU AI Act, autonomous vehicles are classified as high-risk and require safety certifications.

Medical AI

IBM's Watson for Oncology came under scrutiny in D’Angelo v. IBM (2021),[12] in which a patient sued after Watson recommended allegedly unsafe treatments. The court dismissed the claims against IBM, treating Watson as a tool, and the hospital was faulted for over-relying on it; the case clarified the duty of care in human-AI collaboration. A 2019 incident in which Google's AI misdiagnosed diabetic retinopathy sparked reviews but no lawsuits, underscoring the need for validation protocols.

Generative AI and IP

OpenAI's ChatGPT ignited the copyright fight in 2023, when authors including Sarah Silverman sued for infringement in Silverman v. OpenAI,[13] alleging that OpenAI's models were trained on pirated books, potentially exceeding fair use under U.S. copyright law. The case crystallizes the debate over AI's appetite for data and the ethics of scraping content. Together, these cases show a pattern: liability tends to fall on the human elements in the chain, but as AI autonomy grows, more direct accountability may follow.

Regulatory Responses

Governments across the globe are writing AI regulations to close these liability gaps.

EU AI Act

The EU AI Act, which entered into force in 2024, sorts AI into three risk tiers: unacceptable risk (for example, social scoring), high risk (for example, healthcare), and low risk. For high-risk systems it imposes liability and requires conformity assessments, transparency, and human oversight. Non-compliance can draw fines of up to €35 million or 7 percent of global turnover, whichever is higher. Modeled on the GDPR's approach, the Act aims to harmonize the rules across all member states. The fine caps under both regimes share the same structure, the greater of a fixed sum or a share of turnover, illustrated below.
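A minimal sketch of that fine-cap arithmetic, using a hypothetical turnover figure:

```python
def fine_cap(turnover_eur, fixed_cap_eur, turnover_share):
    """Maximum fine: the greater of a fixed sum or a share of turnover."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

turnover = 2_000_000_000  # hypothetical €2 billion worldwide annual turnover

# GDPR, Art. 83(5): up to €20 million or 4% of turnover, whichever is higher.
print(f"GDPR cap:   EUR {fine_cap(turnover, 20_000_000, 0.04):,.0f}")

# EU AI Act: up to €35 million or 7% of turnover, whichever is higher.
print(f"AI Act cap: EUR {fine_cap(turnover, 35_000_000, 0.07):,.0f}")
```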

U.S. Initiatives

In the United States, the proposed Algorithmic Accountability Act (2022)[14] would require impact assessments for automated decision systems, with a focus on bias and fairness. States have acted too: Illinois passed the Artificial Intelligence Video Analysis Act (2023),[15] which requires audits of AI used in surveillance. The Federal Trade Commission (FTC) has also used its existing powers against misleading AI practices, as in its 2023 settlement with a credit-scoring company.

International Efforts

The OECD AI Principles (2019)[16] promote trustworthy AI and stress accountability for harms, the G7 Hiroshima AI Process (2023) calls for common standards, and China's AI Governance Framework (2023) emphasizes data security and ethical use, with penalties for violations. Together these instruments show a shift toward liability, though they are applied differently in each jurisdiction and enforcement remains uneven.

Future Implications

As AI grows more capable, liability rules will likely move toward outcome-based models that prioritize compensating victims over pinpointing fault. Technical innovations such as XAI and federated learning can improve traceability and shrink the black-box problem. At the same time, industry groups such as the AI Alliance have warned that overregulation could hold back innovation.

Class-action suits are likely to rise as biased AI harms large groups of people. Insurance markets are already shifting, with policies covering AI errors much as cyber-liability policies cover cyber-attacks. Ethically, balancing AI's benefits against its risks requires collaboration across fields: lawyers, engineers, and ethicists need to work together.

In the end, AI liability is a rapidly developing field that demands proactive legal strategies. Transparency, accountability, and international cooperation can help society capture AI's benefits while curbing its harms. Ongoing research and dialogue will be crucial for equitable outcomes.[17]

[1] U.S. Consumer Product Safety Act, 15 U.S.C. §§ 2051–2089 (1972).

[2] Product Liability Directive (Council Directive 85/374/EEC), amended by Directive (EU) 2020/1828.

[3] EU AI Act (Regulation (EU) 2024/1689). Official Journal of the European Union, 2024.

 

[4] Estate of McCall v. United States, No. 1:18-cv-00477 (D.D.C. 2019).

[5] Thaler v. Perlmutter, No. 1:22-cv-01564 (D.D.C. 2023), upholding the U.S. Copyright Office's refusal to register AI-generated work.

[6] General Data Protection Regulation (Regulation (EU) 2016/679).

[7] California Consumer Privacy Act (Cal. Civ. Code § 1798.100 et seq.).

[8] EEOC v. iCIMS, Inc., No. 3:22-cv-00001 (N.D. Ill. 2022).

[9] NIST Cybersecurity Framework, Version 2.0 (2024).

[10] IEEE Ethically Aligned Design (2019).

[11] Olmstead v. Tesla, Inc., No. 2022-CA-001592 (Fla. Cir. Ct. 2023).

[12] D’Angelo v. IBM, No. 1:20-cv-00001 (S.D.N.Y. 2021).

[13] Silverman v. OpenAI, No. 3:23-cv-03416 (N.D. Cal. 2023).

[14] Algorithmic Accountability Act of 2022, H.R. 6580 (117th Cong.).

[15] Artificial Intelligence Video Analysis Act, 740 ILCS 14/ (2023).

[16] OECD AI Principles (2019).

[17] Brookings Institution, Report on AI Liability (2023), available at https://www.brookings.edu/research/ai-liability-and-regulation/ (accessed 15 December 2025).

 
