Published On: August 28, 2025
Authored By: Rup Sarkar
Sister Nivedita University, New Town, West Bengal
ABSTRACT
The increasing acceptance of artificial intelligence (AI) in society has altered the way decisions are made in critical sectors such as healthcare, law enforcement, finance, and transportation. AI systems, particularly those with the ability to make decisions on their own, are now capable of handling tasks that were previously solely within the realm of human judgment. This development has made legal liability and accountability urgent concerns. It is challenging to assign blame when an AI-driven decision results in harm because traditional legal frameworks are based on human intent, causation, and foreseeability. This paper examines the fundamental challenges of applying current legal doctrines to artificial intelligence, analyses relevant statutory provisions, assesses case law and legislative developments, and offers concrete suggestions for a future-ready liability framework.
OBJECTIVE
The primary objectives of this article are:
- To examine the legal concerns raised by the increasing autonomy of AI systems.
- To ascertain whether existing legal theories—like negligence and product liability—apply to artificial intelligence.
- To identify flaws in statutory frameworks and legal reasoning.
- To discuss case law and international initiatives related to AI liability.
- To offer suggestions and changes that will ensure accountability in the use of AI without stifling creativity.
ANALYSIS
1. AI’s Character and Legal Consequences
The basis of artificial intelligence, especially machine learning and deep learning systems, is the analysis of large amounts of data and the identification of patterns. Because such systems are not fully predictable and can make decisions that are not predetermined by their original programming, assigning blame is harder than with traditional software.
Key issues consist of:
- Opacity: Many artificial intelligence models operate as “black boxes” whose decision-making processes cannot be readily explained.
- Autonomy: It is more challenging to spot instances of human intervention when AI systems are able to act independently.
- Data Dependency: Decisions that are discriminatory or harmful may be the consequence of inaccurate or biased input data.
- Distributed Development: Developers, data suppliers, hardware suppliers, and end users can all contribute to the final product.
2. Limitations of Legal Doctrines
Traditional liability regimes, such as tort law, vicarious liability, and product liability, are predicated on human agency, direct causation, and foreseeability. Each strains when applied to AI:
- Negligence: When AI’s creator could not have foreseen its behaviour, establishing a breach of the duty of care is difficult.
- Product Liability: Presumes manufacturing or design defects and therefore may not cover AI’s self-evolving behaviour.
- Vicarious Liability: Treating AI as a human or corporate agent becomes problematic when it exhibits behaviours that deviate from its intended functionality.
- More generally, current regulations cannot adequately account for the dynamic, non-linear nature of AI decisions.
KEY PROVISIONS
Several countries and regions have begun to create legal frameworks to address liability related to artificial intelligence. A number of draft documents and proposals attempt to define AI-based liability in various jurisdictions.
European Union – AI Act
- Article 14: High-risk AI systems require human oversight.
- Articles 65–70: Enforcement, compliance mechanisms, and penal provisions.
Risk Categories: Assigns obligations according to whether an AI system falls into the unacceptable-risk, high-risk, limited-risk, or minimal-risk category.
European Parliament’s Resolution
- Proposes “electronic personhood” for the most sophisticated AI entities capable of autonomous decision-making.
- Recommends mandatory liability insurance for operators of AI systems.
India – Draft Digital India Act
Proposes ethical governance over algorithms and penalties for algorithmic bias.
At present, neither criminal nor tort law explicitly addresses AI-specific liability.
Sector-Specific U.S. Guidelines
NHTSA: The National Highway Traffic Safety Administration provides guidance on autonomous vehicles but lacks legally binding liability regulations.
As of 2025, there is no comprehensive federal law on AI liability.
CASES
- Uber self-driving fatality (Tempe, Arizona, USA, 2018)
Facts: An Uber test vehicle operating in autonomous mode struck and killed a pedestrian.
Issue: The vehicle’s AI failed to detect the pedestrian in time, and the safety driver failed to monitor the road.
Outcome: Uber faced civil claims but was not criminally charged; the safety driver was prosecuted.
Implication: Assigning criminal liability for software failures in autonomous systems remains difficult for both developers and users.
- State v. Loomis, 881 N.W.2d 749 (Wis. 2016)
Facts: The COMPAS risk-assessment algorithm influenced the sentencing of the defendant.
Issue: The defendant argued that the algorithm contained bias and lacked transparency.
Outcome: The Wisconsin Supreme Court upheld the sentence, holding that the use of COMPAS did not violate due process, provided courts are cautioned about the tool’s limitations.
Implication: The legal system faces difficulties in delivering due process when opaque AI systems inform decision-making.
CONCLUSION
AI brings with it new legal issues as well as revolutionary possibilities. Although independent decision-making processes increase operational accuracy and efficiency, they also compromise the traditional liability law concepts of intent, causation, and foreseeability. The legal system has not kept up with technological advancements, resulting in a patchwork of regulations and unclear legal boundaries.
In the absence of specific legal regulation, AI developers risk excessive litigation while victims of AI-caused harm struggle to obtain fair compensation. Maintaining innovation while establishing accountability requires an active legal system that balances civil responsibility, regulatory standards, and ethical principles.
SUGGESTIONS
- AI-specific clauses that acknowledge shared accountability and autonomous decision-making must be incorporated into national laws.
- In order to guarantee victim compensation, AI systems, particularly those that pose a high risk, should be required to carry liability insurance, much like auto insurance.
- Explainable AI should be required by law, especially for systems that relate to public safety, medical care, or legal rights.
- Mandate “human-in-the-loop” or “human-on-the-loop” designs in all critical AI systems to maintain accountability and control.
- Create specialized benches or tribunals for disputes pertaining to AI in order to develop jurisprudence and effectively address technical issues.
- Recognizing that AI transcends national boundaries, create transnational frameworks for AI ethics and liability that are comparable to GDPR.
REFERENCES
- State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
- European Union, Artificial Intelligence Act, Regulation (EU) 2024/1689.