Published On: September 30th 2025
Authored By: Prostuti Dutta
University Law College
Abstract
This article delves into questions of legal liability and accountability when AI commits a mistake. A major highlight of the article is the need for proper regulatory laws across the world, which would provide a clear standard for the responses of an AI.
Introduction
Artificial Intelligence, or simply AI, has become an integral part of our day-to-day lives. Whether it is the simple homework of a fifth-grade student or a complex calculation, AI can do it all. Due to its versatility, AI has become irreplaceable for a large section of twenty-first-century society. In general terms, it has made life a lot easier, and perhaps that is what it was intended to do in the first place. Among all the people using AI, millennials and Generation Z are observed to use it the most, whether for coding, homework, proofreading or calculations. It has even been observed that in a workplace, a person using AI is able to complete their tasks faster than a person who does not, making them more efficient and able to perform more tasks per hour. Gradually, people have started to rely on AI far more than usual, and just as a coin has two sides, AI has both positives and negatives.
Anyone who has used AI services, whether ChatGPT, Gemini, Perplexity or others, knows that any information retrieved from them must be rechecked against trusted sources. This clearly implies that however widely used AI may be, it cannot be relied upon fully. Popular AI models such as ChatGPT even state beneath their responses that they may make mistakes and that users must be careful. This raises a significant question in everyone's mind: if AI commits a mistake, who is to be blamed? Is it the developer who is to be blamed for the responses of the AI? Is the user who provided the prompts liable? Is the company that owns the AI model liable? Or is the Government liable for not regulating the responses of the different AI models in the market?
There are many instances of AI committing blunders while performing its functions. In a New York federal court filing, a lawyer was found to have used ChatGPT to cite non-existent case laws.[1] A customer service chatbot used by a Chevrolet dealership was exploited into agreeing to every request of a user, including agreeing to sell a new Chevrolet for a dollar as a "legally binding" order. Another chatbot used by a healthcare association was removed because it repeatedly recommended calorie-deficit diets to a person suffering from an eating disorder. These examples are just the tip of the iceberg, and the list goes on. So who is to be blamed in these cases? Should the AI be blamed for citing fake case laws? Should the user be blamed for exploiting the loophole in the Chevrolet chatbot? Or is it the fault of the developer of the chatbot that recommended such harmful diets to a patient and worsened their condition? The answers to these questions lead down a rabbit hole of new explorations, as AI is non-sentient. Most AI models train themselves on the responses and prompts provided by users. But this does not properly answer the question of who is accountable for AI's mistakes, since the developers are also the people who monitor how the AI works and take corrective action when any adversity is observed. Can the government also be made liable, since it has not made any policy regarding the regulation of AI? It all circles back to the same question of who should be blamed when AI commits a mistake.
A mother from Florida has filed a case against Character.AI alleging that her teenage son was drawn into the platform and formed a sexual and abusive relationship with a chatbot, which led him to end his life.[2] She has stated that her son died of a self-inflicted gunshot wound after his last conversation with this AI. This is a sensitive issue that has caused a huge uproar among the users and community of AI. It was found that the teen was isolated from reality and had formed a deep bond with the AI over time, which led to a deep dependence on it. But here the question arises again: who must be blamed? Should the developers of Character.AI be held accountable? Should the mother be blamed for negligence in checking on the wellbeing of her son? Should Google be blamed for having a role in developing this AI? Or should the chatbot itself be blamed? That is for the court to decide, but the issue opens a huge question and a panel for discussion: can an AI be blamed for its actions, or should an AI have the autonomy to do whatever it wants and enjoy freedom of speech? Since AI is non-sentient, we cannot say that it will have rights like a normal human being; yet even a non-sentient system needs to follow certain restrictions on its actions, and such restrictions have still not been formulated.
Legal Liability
A legal liability arises when the legal right of a person has been infringed by the act or omission of another.[3] Legal liability ascertains who is to be blamed when the rights of a person have been infringed and holds them accountable to make good the loss of the person who suffered it. In law, liability can be categorised into criminal and civil liability.
Criminal liability arises when a person commits a crime against another, leading to fine or imprisonment of the defendant, whereas in civil liability the person's act or omission causes damage to another and leads to compensation for the person who suffered the loss. Both forms of liability hold a party accountable for their actions.
In the context of AI, liability can be both civil and criminal in nature. AI giving out false citations could amount to fraud, and AI pushing a teen to death could attract criminal liability. But is AI liable in reality? Even if AI is held accountable, who will compensate the loss someone has suffered due to its errors, or serve punishment for the criminal activities it commits? And what if the companies in charge of these AIs state that the AI is at fault when in reality they were the ones who committed the error in developing their programs? This remains a grey area, leaving a huge opportunity for people to exploit others due to the lack of regulation.
Regulations So Far
In April 2021 the first-ever AI Act was proposed by the European Commission, establishing a risk-based classification system for AI.[4] It was aimed at providing citizens with a safe, transparent, traceable and non-discriminatory AI system. It banned certain AI applications in Europe, such as cognitive behavioural manipulation of people, classification of people on the basis of socio-economic status, real-time biometric identification and facial recognition. The transparency requirements of the Act also safeguard intellectual property, as generative AI systems now have to disclose the sources from which their content is taken. The regulation further aims to support AI innovation and startups in Europe, making it a comprehensive regulation for AI. But the Act has certain limitations as well. The first and foremost is the exception it provides for AI systems developed for national security, which means a government can invoke national security to introduce systems prohibited under the Act. The Act has also failed to address measures for the protection of fundamental rights and is not sufficient to prevent abuse.[5]
Furthermore, Denmark is set to change its copyright law to give every person copyright over their own body, facial features and voice.[6] This is done to tackle the problem of deepfakes and AI-generated images. It is a huge step taken by the Danish government to prevent abuse and to take down any picture shared on the internet without the subject's consent.
India
No explicit legislation has been made for the regulation of AI in India, but the Digital Personal Data Protection Act 2023 (DPDP Act 2023), which has not yet come into force, will have some impact on the data processed by AI systems.[7] AI systems will have to comply with certain rules and regulations on how the personal data of the people of India is used. The Act outlines the duties of "data fiduciaries", the entities that collect, store and process personal data. Like the EU AI Act, this Act exempts actions of the government pertaining to national security, public order and the investigation of crimes.[8]
But India still lacks regulation to safeguard its citizens from the abuse that tech giants can inflict on people's lives. Deepfakes are a major issue in India today, with the female population especially vulnerable to AI-generated images made to harm their reputation. Recently a case arose in Assam where an ex-boyfriend, out of rage, created an account filled with AI-generated pornographic content of his ex-girlfriend. This is a serious concern, as it injured the reputation of the girl and, moreover, caused her mental trauma.
Conclusion
The discussion of whether AI is a boon or a bane is quite obsolete by now, as AI has penetrated so deep into our lives that it is difficult to live without it. AI is part of all the major service sectors around us and has quite efficiently reduced workloads and made work faster to complete. But the question of whether AI is liable when it commits an error still remains. With the rising use of AI, privacy concerns also arise, posing a risk as to how a user's data is being used. The Indian government should step into the regulation of AI to safeguard the rights of its citizens and prevent exploitation by tech giants. India has seen a huge rise in deepfakes in recent times, where the features of politicians are sometimes used to make derogatory videos or photos. The Indian government should bring changes to its copyright laws as well to protect the rights of people over their bodies.
References
[1] “When AI Goes Wrong: 10 Examples of AI Mistakes and Failures” <https://share.google/5HvdVY9ppQhsi0BCt> accessed August 11, 2025
[2] Yang A, “Lawsuit Claims Character.AI Is Responsible for Teen’s Suicide” NBC News (October 23, 2024) <https://share.google/EDn7IWMVAmuHYnKIP> accessed August 11, 2025
[3] LII, “Liability” (LII / Legal Information Institute) <https://share.google/5AyFUW6rQTiMxE22s> accessed August 11, 2025
[4] “EU AI Act: First Regulation on Artificial Intelligence” (Topics | European Parliament) <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> accessed August 11, 2025
[5] “Article 2: Scope” (Securiti, July 11, 2024) <https://share.google/FYrJRMHYnlIL3KNtV> accessed August 11, 2025
[6] Bryant M, “Denmark to Tackle Deepfakes by Giving People Copyright to Their Own Features” The Guardian (June 27, 2025) <https://share.google/BK7oMyEmO1HhYFkAT> accessed August 11, 2025
[7] Jan, “Data Privacy Considerations Surrounding AI Use in India” (Law.asia, April 2, 2025) <https://share.google/UiS84tOvFCCNSbsxU> accessed August 11, 2025
[8] “The Digital Personal Data Protection Bill, 2023” (PRS Legislative Research) <https://share.google/h2hAunrV2VF7IJWmP> accessed August 11, 2025