AI AND LEGAL LIABILITY: CHALLENGES OF ACCOUNTABILITY IN AUTONOMOUS DECISION-MAKING

Published On: October 4th 2025

Authored By: BOMMU LAKSHMI BOLINI
Gitam Deemed to be University

ABSTRACT

AI is revolutionizing several sectors, including driverless cars, healthcare, banking, and law enforcement, and these systems have great potential to transform industries ranging from hospital diagnostics to autonomous vehicles navigating challenging urban settings. This rapid adoption has sparked debate regarding legal responsibility, particularly where AI systems make harmful decisions on their own. Traditional legal doctrines such as mens rea, human agency, and foreseeability are ill-suited to the intricate accountability problems of autonomous systems, and as AI grows more independent, determining accountability becomes a complex legal question. This article examines the shortcomings of existing liability frameworks, evaluates proposed legal models, and surveys international approaches to AI regulation, with particular attention to the developing Indian legal debate.

INTRODUCTION

As AI has developed, rule-based systems have given way to complex machine learning (ML) models that learn from data and improve themselves. Medical diagnostic tools, self-driving automobiles, and predictive policing algorithms are examples of autonomous decision-making systems that function with little human assistance. Although their deployment can increase efficiency and reduce human error, their decision-making processes pose obstacles to existing legal frameworks.

The law traditionally assigns liability to identifiable human actors such as individuals, businesses, or governmental bodies. AI systems, by contrast, frequently operate as “black boxes,” generating results without transparent reasoning. This opacity complicates the assignment of blame and the imposition of legal accountability. The widespread use of AI systems also raises complicated legal questions of jurisdiction and cross-border legal harmonization, which can create difficulties for governmental organizations and international enterprises alike. Resolving these problems and promoting international harmonization of AI regulations is essential to creating a uniform regulatory framework. This study examines legal duty and accountability in AI decision-making to help legislators, developers, and consumers navigate the intricate legal landscape of artificial intelligence and use these technologies responsibly.

NATURE OF AUTONOMOUS DECISION-MAKING 

Three types of AI-based decision-making can be distinguished:

  1. Assisted Decision-Making: AI offers insights and suggestions to assist human decision-making (e.g., AI legal research tools). Humans are still solely responsible.
  2. Augmented Decision-Making: AI and humans collaborate and share accountability for outcomes (e.g., AI-assisted surgery).
  3. Completely Autonomous Decision-Making: AI makes choices without human input (e.g., driverless autonomous vehicles).

The level of autonomy directly affects how liability is distributed. Because fully autonomous systems involve little human control, assigning blame when harm occurs can be difficult.

LEGAL LIABILITY UNDER TRADITIONAL FRAMEWORKS

Liability may be evaluated under several heads of current legal doctrine:

  1. Negligence: Proving negligence requires evidence of a duty of care, a breach, a causal relationship, and damages. It becomes difficult to establish a breach, however, when AI systems make decisions without direct human oversight.
  2. Strict Liability: Strict liability can apply to inherently dangerous activities, such as operating heavy machinery. Some academics argue that fully autonomous AI should be treated like an inherently hazardous instrumentality.
  3. Product Liability: Manufacturers and developers may be held accountable if AI products are defective. The dynamic, self-modifying nature of AI, however, makes it difficult to show that a defect existed when the product was released.
  4. Vicarious Liability: Just as employers are held accountable for the conduct of their employees, operators or controllers of AI systems may be held accountable for harm caused by the AI.

DIFFICULTIES IN ATTAINING ACCOUNTABILITY

  1. Transparency and the Black-Box Problem: Many AI models, deep learning systems in particular, function without offering a clear justification for their choices. This lack of explainability makes it challenging for courts to determine whether a decision was irrational.
  2. Distributed Responsibility: Software developers, data suppliers, system integrators, and end users are frequently involved in the development, training, and deployment of AI systems. Determining the extent of each actor's liability is difficult.
  3. Evolving Behavior: Because AI systems continue to learn, they can modify their decision-making processes over time, unlike static, conventional software. Liability attribution becomes harder when the AI behaves unpredictably as a result of this evolution.
  4. Jurisdictional Concerns: Since AI systems frequently operate across borders, it can be very difficult to determine which country's laws apply in a given case.

GLOBAL STRATEGIES FOR AI LIABILITY

A number of governments are developing legal frameworks tailored to AI:

European Union: Risk-based regulation is emphasized in the proposed EU Artificial Intelligence Act, which calls for accountability and transparency measures. The European Parliament has also explored the idea of “electronic personality” for AI entities, though the concept remains contested.

United States: The United States relies on its existing tort and product liability rules, applied on a case-by-case basis.

India: In its National Strategy for Artificial Intelligence, NITI Aayog has recognized the need for legal adaptation; however, India has not yet put AI-specific liability rules into effect.

OECD: The OECD Recommendation on Artificial Intelligence stresses accountability, transparency, and responsible AI while urging member states to enact supporting regulations.

THE ELECTRONIC PERSONHOOD PROBLEM

Giving AI systems legal personality, akin to that of companies, is one suggested remedy. With insurance arrangements covering damages, this would enable AI to be sued and held accountable. Critics counter that this could create moral hazard and absolve human actors of responsibility.

SUGGESTIONS FOR AI LEGAL LIABILITY SOLUTIONS

  1. Hybrid Liability Models: Combining fault-based liability for low-risk AI applications with strict liability for high-risk applications could help ensure fairness.
  2. Mandatory Insurance: Requiring liability insurance for developers and operators of autonomous AI systems can guarantee victim compensation without drawn-out court battles.
  3. Standards for Explainable AI (XAI): Enforcing explainability and openness can help regulators and judges understand AI decision-making, which facilitates liability attribution.
  4. Shared Liability Frameworks: Users, developers, and deployers of AI could enter into explicit contracts allocating liability according to each party's contribution to the harm.
  5. Regulatory Sandboxes: Before broad deployment, AI systems can be tested in controlled settings to detect hazards and refine accountability mechanisms.

RESOLVING ETHICAL AND LEGAL ISSUES

Technical Methods for Improving AI Transparency

Increasing AI transparency requires both legal initiatives and technological developments. One promising technological approach under research is explainable AI (XAI). Accountability in AI decision-making is not purely a legal matter; it also has ethical implications. Ethical AI design principles, such as fairness, accountability, and transparency (FAT), should be embedded in both technical and legal frameworks.

In practice, this entails designing algorithms that can offer clear, understandable explanations for their outputs, helping users comprehend the reasoning behind a particular decision.
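By way of illustration only, the sketch below shows one common post-hoc explainability technique, permutation importance, which produces a human-readable ranking of the inputs that most influence a model's decisions. It is a minimal example assuming Python with scikit-learn; the loan-approval feature names and the synthetic data are invented for demonstration and do not come from this article.

    # Minimal sketch of post-hoc explainability via permutation importance.
    # Assumes scikit-learn; feature names and data are purely illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    feature_names = ["income", "credit_history_years", "existing_debt", "age"]

    # Synthetic stand-in for a real decision-making dataset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, len(feature_names)))
    y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance estimates how much each feature drives the model's
    # decisions, giving regulators or courts a simple, readable account.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda item: item[1], reverse=True):
        print(f"{name}: {score:.3f}")

Rankings of this kind do not fully open the black box, but they offer courts and regulators a starting point for assessing whether a decision rested on legitimate factors.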

Ways to Ensure Ethical and Legal Compliance

To ensure compliance with legal and ethical standards, AI developers and organizations should follow these practices:

  • Stakeholder Engagement: Include a wide range of stakeholders in the development process, such as ethicists, legal professionals, and affected communities, so that different viewpoints and concerns are taken into account.
  • Ethical Impact Assessments: Conduct comprehensive evaluations of possible ethical ramifications before deploying AI systems.
  • Continuous Monitoring: Establish procedures for ongoing observation and assessment so that AI systems continue to comply with laws and ethical principles over time.
  • Transparency and Documentation: Keep thorough records of AI system architectures and decision-making procedures to promote accountability and transparency.

Ensuring that development teams are inclusive and diverse can also reduce bias and produce more equitable AI systems.

CONCLUSION

Traditional liability doctrines need to be reconsidered in light of AI's rapid growth. While current legal frameworks provide partial answers, they are not well suited to the particular difficulties of autonomous decision-making. To create flexible, victim-centered, and technology-neutral liability regimes, policymakers, technologists, and legal professionals must work together.

As AI continues to permeate vital industries, the law must evolve so that technological advancement does not compromise accountability, justice, or fairness. Building confidence in autonomous decision-making systems requires precise, predictable, and enforceable rules for AI liability.

REFERENCES

  1. Priyadarshi Nagda, 'Legal Liability and Accountability in AI Decision-Making' (2025) 11 IJIRT International Journal of Innovative Research in Technology https://ijirt.org/publishedpaper/IJIRT174899_PAPER.pdf accessed 09 August 2025.
  2. European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) COM (2021) 206 final.
  3. David Leslie, Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector (The Alan Turing Institute 2019) accessed 09 August 2025.
  4. National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (2023).
  5. NITI Aayog, National Strategy for Artificial Intelligence (2018).
  6. OECD, Recommendation of the Council on Artificial Intelligence (2019).
  7. Ryan Calo, 'Artificial Intelligence Policy: A Primer and Roadmap' (2017) 51 UC Davis Law Review 399.
