Published On: September 4, 2025
Authored By: Susmita Chatterjee
Kolkata Police Law Institute (Calcutta University)
INTRODUCTION
Artificial intelligence (AI) refers to computer systems capable of performing complex tasks that once required human intelligence. Industries from healthcare and finance to transportation and entertainment are being transformed by the rapid advancement of AI technology. AI encompasses natural language processing tools, machine learning algorithms, and autonomous robots, which are increasingly used to perform tasks and make decisions that were once the sole domain of humans. Although AI promises immense benefits, it introduces significant challenges, particularly in the realm of legal liability. The central question is who is legally responsible when something goes wrong, whether that means harm to people, harm to society, or damage to property, a question that grows more complex as AI systems become increasingly autonomous and integral to critical decision-making. This article examines the intersection of AI and legal liability in an AI-driven world, reviews existing legal frameworks, and discusses potential solutions for ensuring accountability for AI.[1]
THE RISE OF AI AND THE CHALLENGES OF LIABILITY
AI systems have the ability to perform tasks independently, frequently needing little or no human assistance. These systems learn from large amounts of data, adapt to new situations, and evolve rapidly over time. Yet this very independence makes it difficult to allocate legal responsibility when things go wrong. Traditional legal frameworks, which are grounded in human action, often struggle to accommodate the unique characteristics of AI, such as its capacity for self-improvement and its unpredictability. Consider the case of autonomous vehicles. Self-driving cars are equipped with advanced AI systems that make decisions about steering, braking, and navigation. If an autonomous vehicle is involved in an accident, it can be difficult to decide who should be held answerable: the manufacturer, the software developer, the car owner, or some other party. The matter is further complicated by the fact that AI systems are often "trained" on large data sets, and their behaviour can change gradually over time based on the data they receive. As such, AI complicates traditional ideas of fault, negligence, and causation in legal settings.
The question of answerability in AI-related incidents becomes even more urgent when we consider high-risk applications such as healthcare (e.g., AI-powered diagnostic tools), criminal justice, and defence. In these contexts, mistakes made by AI can have severe consequences, raising concerns about how legal systems should respond.
TYPES OF LEGAL LIABILITY IN AI
AI systems introduce new problems when it comes to determining legal liability. Traditional categories of legal responsibility, such as negligence, product liability, and strict liability, must be adapted or applied in new ways to account for AI's unique attributes.
PRODUCT LIABILITY
Product liability refers to the responsibility of manufacturers and sellers to ensure that their products are safe for use. In the context of AI, the question becomes: who is responsible when an AI system embedded in a product causes harm? AI-powered products, such as self-driving cars, medical devices, and home robotics, introduce risks that are not present in traditional consumer products. AI systems can "learn" from their environment, which means that their behaviour may evolve gradually in unexpected ways. For example, if an autonomous vehicle is involved in an accident, questions arise as to whether the manufacturer of the vehicle, the developer of the software, or the operator is at fault. Was the car's AI algorithm designed improperly, or did it make an unexpected decision in response to a complex situation? Current product liability laws may not adequately address these complexities, as they are typically based on the assumption that products are static once sold, whereas AI systems are dynamic and can change their behaviour after sale.
NEGLIGENCE
Negligence occurs when an individual or entity fails to exercise a reasonable standard of care, leading to harm. The important question here is whether the developers or operators of an AI system neglected to take reasonable safety measures in its design, testing, deployment, or oversight. For example, if an AI system used to diagnose medical conditions makes a wrong diagnosis that harms a patient, the question would be whether the healthcare provider or the developer of the AI system failed to sufficiently test or monitor it. In AI contexts, negligence may include:
- Failing to test the system thoroughly before deployment,
- Ignoring known weaknesses or risks in the system,
- Using biased or incomplete data to train the AI.
For instance, in a case involving an AI-powered hiring algorithm that discriminates on the basis of race or gender, the developer or the company that implemented the system may be found negligent for failing to address or prevent bias in the system's decision-making.
STRICT LIABILITY
Strict liability holds parties liable for harm regardless of fault or negligence. In the context of AI, strict liability may apply where AI systems are deployed in high-risk settings, such as autonomous vehicles, medical devices, or weapons. In these cases, the risks associated with deploying AI may be so great that liability is imposed simply because harm occurred, even in the absence of negligence.
For example, if an autonomous vehicle causes the death of a pedestrian, the vehicle's manufacturer and the developer of its AI system could be held strictly liable, even if there was no negligence in the vehicle's design or operation. This approach could encourage manufacturers to ensure that their vehicles maintain the highest standards of safety and risk management.
VICARIOUS LIABILITY
Vicarious liability refers to holding one party responsible for another's actions because of the relationship between them, such as an employer being responsible for the actions of an employee. In the context of AI, a company or organization may incur legal responsibility if its AI systems make decisions that negatively affect employees, customers, or the public.
If an AI system makes a harmful or unfair decision, such as in hiring, firing, or credit scoring, the company that deployed the AI could be held vicariously liable for the resulting damages.
For example, if a company unintentionally discriminates against certain groups in hiring because of flawed data used by an AI system, the company could be held liable for that discrimination, even if the AI did not "intend" to treat anyone unfairly. This form of liability requires companies to ensure that they deploy AI systems carefully and ethically.
CRIMINAL LIABILITY
Criminal liability raises the difficult question of who is responsible for crimes committed when AI goes wrong. AI systems cannot be punished like humans, but organizations or individuals may be held criminally liable if they use AI systems to commit a crime, or if the AI itself engages in unlawful actions.
For example, if an AI system is used to commit fraud, such as stealing money or carrying out cybercrimes, the people who set up or used the AI could be held responsible for those actions. Where an AI system is used to spread hate, facilitate terrorism, or carry out other illegal activities, the legal system must determine whether the AI system itself should be treated as responsible for the crime, or whether the responsibility lies entirely with its developers or users.[2]
ADDRESSING LEGAL LIABILITY THROUGH REGULATION AND LEGISLATION
As AI systems develop rapidly, lawmakers and regulatory authorities are working to determine how legal responsibility should be allocated in relation to AI. Various jurisdictions are exploring new laws, as well as updating existing ones, to meet the unique challenges arising from AI systems.
THE EUROPEAN UNION’S AI ACT
In April 2021, the European Commission introduced the AI Act, which primarily aims to ensure that AI is used in a safe, fair, and transparent manner. The Act classifies AI systems according to their risk level, ranging from minimal to high, and establishes more stringent regulations for high-risk AI applications. The AI Act also sets out guidelines regarding legal liability, including provisions for compensating victims, to ensure that individuals harmed by AI systems receive fair treatment and justice.
Under the AI Act, developers and operators of high-risk AI systems must follow certain rules:
- Conduct risk assessments,
- Put safety measures in place to prevent harm,
- Ensure that AI decision-making processes are transparent and that the reasoning behind decisions can be clearly validated.
Importantly, the AI Act aims to hold entities accountable when their AI systems cause damage. The Act creates the potential for civil liability, allowing victims of AI-related harm to seek compensation from manufacturers or developers.
INTERNATIONAL COLLABORATION AND THE NEED FOR HARMONIZED STANDARDS
Given the global nature of AI development and deployment, international cooperation will be crucial in addressing the legal challenges associated with AI liability. Organizations such as the OECD and the United Nations are working on guidelines and recommendations for AI governance, including frameworks for liability and accountability. A harmonized approach to AI regulation could help ensure that AI systems are developed and deployed safely, while providing clear pathways for addressing legal liability.[3]
EMERGING SOLUTIONS FOR AI LIABILITY
As the regulatory landscape continues to evolve, several solutions are emerging to address the challenges of AI liability:
Auditing and Certification: Regular audits and certification processes for AI systems could help ensure that AI technologies meet safety, ethical, and legal standards before they are deployed. Third-party audits of AI systems could verify that they are functioning as intended and are free from harmful biases.
AI Personhood: Some legal scholars have proposed granting AI systems a form of “electronic personhood,” giving them certain rights and responsibilities under the law. While this idea remains controversial, it highlights the need for legal frameworks that can address the growing complexity of AI and its potential to impact society in profound ways.[4]
CONCLUSION
The rise of artificial intelligence presents significant challenges for the legal system, particularly in the area of liability. As AI systems become more integrated into society, the traditional legal frameworks that govern human actions and responsibility will need to adapt to account for the unique characteristics of AI, including its autonomy, self-learning capabilities, and unpredictability. Legal frameworks must be updated to ensure accountability, fairness, and transparency in AI-related incidents, while also promoting innovation and progress. Through effective regulation, insurance models, and ethical guidelines, society can navigate the complexities of AI liability, ensuring that AI technologies are used responsibly and safely for the benefit of all.
REFERENCES
[1] European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). https://ec.europa.eu
[2] Federal Trade Commission (2020). AI and Consumer Protection. https://www.ftc.gov
[3] OECD (2021). OECD AI Principles. https://www.oecd.org
[4] National Highway Traffic Safety Administration (2020). Automated Vehicles for Safety. https://www.nhtsa.gov




