Published On: April 20, 2026
Authored By: Kshitija Kiran Burbure
DES Shri Navalmal Firodia Law College
Abstract
The rise of generative artificial intelligence, particularly text-generating systems, has presented defamation law with significant new challenges. Unlike human beings, AI systems cannot be held legally liable for harmful or false statements they produce. The question of who should bear responsibility when AI-generated content harms a person’s reputation therefore demands urgent legal attention. This article analyses the potential liability of developers who create AI systems, platform providers who deploy them, and users who generate and disseminate AI-produced content. It examines how existing defamation law may be interpreted to apply to AI-generated speech and proposes regulatory recommendations for establishing a coherent liability framework, one that protects individuals while facilitating the responsible development of AI technology.[1]
I. Introduction
Generative AI applications such as ChatGPT, Google Gemini, and Meta AI have dramatically transformed the ways in which we create, share, and consume information. These systems rely on large datasets and machine learning algorithms to generate responses based on user-provided prompts. While such technologies offer considerable benefits, they are not always accurate. One of the principal challenges posed by generative AI is the production of inaccurate or fabricated information, a phenomenon commonly referred to as “AI hallucinations.”[2]
AI hallucinations occur when a generative AI system produces outputs that are factually incorrect, internally inconsistent, or entirely fabricated, based on patterns detected in its training data rather than on verified information.[3] When such hallucinated content falsely refers to a real, identifiable person, it may damage that person’s reputation and give rise to questions of defamatory liability. Traditional defamation law was developed to address statements made by human beings through print or digital publications. No clear legal definition of liability has yet been established for defamatory content generated by AI systems. This article addresses the central question: who is legally liable for a defamatory statement generated by an AI system?
II. Defamation in Traditional Legal Frameworks
Defamation refers to the making of a false statement that damages a person’s reputation. The law recognises two forms: libel, which concerns published or written statements, and slander, which concerns spoken statements. To succeed in a defamation claim, a plaintiff must generally establish four elements: that a false statement was made; that the statement was published or communicated to a third party; that the defendant acted with fault amounting at least to negligence; and that the statement caused damage to the plaintiff’s reputation.[4]
In India, defamation is addressed both as a criminal offence under Section 356 of the Bharatiya Nyaya Sanhita, 2023,[5] and as a civil wrong under the law of torts. Traditionally, the law was developed to govern defamatory statements made by human beings, premised on the understanding that speech is an expression of conscious thought and intent.
AI presents a fundamental challenge to this framework. AI systems generate content automatically by recognising and reproducing patterns present in their training data. Unlike a human being, an algorithm possesses no self-awareness, no capacity for judgment, and no intention to cause harm. Because AI systems cannot bear legal liability, courts must look beyond the system itself to determine who among its developers, deployers, or users should be held responsible for the harm caused by its outputs.
III. AI-Generated Speech and the Risk of Defamation
AI systems are trained on large volumes of text, from which they learn the statistical structure of language. When a user submits a prompt, the system does not search a database for pre-existing answers; instead, it constructs a response by predicting the most statistically probable sequence of words given its training. This process can produce hallucinations: content that is plausible in form but false in substance.
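The prediction process described above can be illustrated with a toy sketch. The vocabulary, context, and probability figures below are entirely hypothetical and stand in for distributions that real systems learn from vast training corpora; the point is only that selection is driven by statistical likelihood, not truth.

```python
import random

# Toy next-word "model": a hypothetical probability distribution over
# possible continuations of a single context. The words and numbers are
# invented for illustration, not taken from any real system.
NEXT_WORD_PROBS = {
    ("the", "senator", "was"): {
        "re-elected": 0.40,   # accurate continuation in our toy world
        "indicted": 0.35,     # fluent but false -- a potential "hallucination"
        "praised": 0.25,
    },
}

def sample_next(context):
    """Sample a continuation in proportion to its probability.

    The model has no concept of truth: the false-but-plausible word
    "indicted" is emitted roughly 35% of the time in this toy example,
    which is the mechanism behind so-called hallucinations.
    """
    words, probs = zip(*NEXT_WORD_PROBS[context].items())
    return random.choices(words, weights=probs, k=1)[0]

print(sample_next(("the", "senator", "was")))
```

Because every candidate word is chosen only by likelihood, a defamatory continuation can be generated without any intent, knowledge, or fault on the part of the system itself.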
When hallucinated content falsely refers to a real person, the risk of defamation arises. An AI system may generate false information about an individual that, even without any intent to deceive, could negatively affect that person’s reputation if the content is read or shared by others. A recent illustration of this risk is the lawsuit filed by conservative activist Robby Starbuck against Meta, in which the plaintiff alleged that a Meta AI chatbot falsely associated him with criminal conduct.[6] Cases of this kind demonstrate that existing legal mechanisms are not yet fully equipped to address the harms presented by AI-generated speech.
IV. Liability Challenges in AI-Generated Defamation
Determining where liability lies for defamatory AI-generated content presents courts with challenges that differ fundamentally from those in traditional defamation cases. Liability may potentially rest with the developer who designed and trained the AI system; with the platform provider who makes the system available to users; or with the user who prompted the system to generate the content and subsequently shared it. The involvement of multiple actors at different stages of creation and dissemination significantly complicates the attribution of responsibility.
1. Liability of Developers: There is considerable discussion in legal circles about holding AI developers liable for defamatory outputs produced by their systems. Developers design the AI, structure its behaviour, determine what data is used to train it, and implement controls intended to reduce the risk of harmful outputs. Where a defamatory statement results from a flaw in the system’s design, inadequate training data, or insufficient safeguards, it may be reasonable to hold the developer liable under a negligence standard, provided that the failure to prevent such output was clearly unreasonable in the circumstances.
However, practical and philosophical objections arise. Imposing strict liability on developers would place an exceptionally onerous burden on technology organisations, effectively requiring them to guarantee that no user of their system will ever produce defamatory content, even in circumstances entirely beyond the developer’s control. At the same time, developers are best positioned to foresee and mitigate such harms, given their unique understanding of the system’s capabilities and limitations.
2. Liability of Platform Providers: Platform providers, being organisations that deploy, operate, and maintain AI systems, also face significant liability exposure in AI defamation cases. Under Section 79 of the Information Technology Act, 2000,[7] intermediaries in India enjoy a safe harbour from liability for unlawful third-party content, provided they comply with due diligence requirements and remove such content upon receiving proper notice. However, this provision does not clearly address situations in which a platform is actively involved in creating or amplifying content through its own AI systems. Where an AI system generates defamatory content, the platform can no longer be characterised as a passive host. Determining the appropriate level of control, supervision, and responsibility that a platform should bear over AI-generated outputs remains a significant and unresolved legal question.
3. Liability of Users: Users who employ AI systems to generate speech may also bear liability for defamatory content. Although AI tools generate responses based on user prompts, it is the human user who formulates the prompt and determines whether to publish or share the output. A user who intentionally prompts an AI to generate a false or misleading statement about a person, or who shares AI-generated content without adequately verifying its accuracy, may in such cases bear liability for the resulting harm.
Legal scholars have consistently maintained that conventional defamation doctrines continue to apply to human intermediaries who republish or assist in spreading defamatory content, regardless of the content’s original source.[8] This principle was recently reinforced when the Kerala High Court cautioned editors against republishing defamatory posts, holding that prior public availability of material provides no shield against liability for republication.[9] Under applicable defamation law, individuals may therefore be held liable not only for originating false statements but also for republishing them.
V. Suggestions for Legal Reform
The following reforms are proposed to address the current gaps in the legal framework governing AI-generated defamation:
1. Primary Liability of Users for AI-Generated Defamation: Users should bear primary liability for defamatory statements generated by AI systems at their direction, and for the dissemination of such statements. Since the user submits the prompt and makes the decision to publish or share the output, the user is the final decision-maker with respect to the information generated. Disseminating false material produced by AI should therefore attract the same legal consequences as disseminating false material through conventional means.
2. Duty of Users to Verify AI-Generated Content: Users should be required to perform a reasonable degree of verification of AI-generated information before relying on it for public communications, professional purposes, or social media publication. Given the well-documented risk of AI hallucinations, users of generative AI systems must engage in basic fact-checking and cross-reference AI-generated content with credible sources before sharing it. Users who fail to perform reasonable verification may be found to have acted negligently.
3. Duty of Reasonable Care for AI Platform Providers: Organisations that provide AI platforms should be subject to a duty of reasonable care in the design, development, and maintenance of their systems. This duty should include implementing safeguards against the generation of defamatory content, monitoring for patterns of harmful output, and responding promptly to complaints regarding the dissemination of false information.
4. Reporting and Remedy Procedures: AI platforms should establish clear and accessible procedures through which individuals who have been harmed by AI-generated defamatory content may report the incident. Such procedures should also afford the affected party the right to request corrections, removal of the defamatory content, or clarification regarding incorrectly generated outputs.
5. Mandatory Disclaimers for AI-Generated Content: Any platform featuring AI-generated content should display a prominent disclaimer informing users that outputs may contain errors or entirely false information. Disclaimers should encourage users to exercise caution and not rely on AI-generated content without independent verification.
VI. Conclusion
Artificial intelligence is reshaping how information is created and disseminated; however, it also introduces significant legal challenges when AI-generated content causes reputational harm to individuals. Since AI systems cannot be held legally liable for their outputs, responsibility must rest with the human actors who create, deploy, and use them.
Users who rely on AI-generated content must proceed with care and may be held liable for disseminating false and harmful material. Platform developers and providers must in turn take reasonable measures to design systems that minimise the likelihood of harmful outputs. A balanced approach that distributes liability appropriately among users, developers, platform providers, and regulators is necessary to ensure both adequate protection for individuals and a legal environment that does not stifle responsible innovation in AI technology.
References
[1] For a general overview of defamation principles applicable to digital content, see Manupatra Academy, Law of Torts: Chapter 19 — Defamation (student.manupatra.com) <http://student.manupatra.com/Academic/Abk/Law-of-Torts/Chapter19.htm> accessed April 2026.
[2] Google Cloud, What Are AI Hallucinations? (Google Cloud) <https://cloud.google.com/discover/what-are-ai-hallucinations> accessed April 2026.
[3] S Gregory, AI Hallucinations Can Pose a Risk to Your Cybersecurity (IBM Think) <https://www.ibm.com/think/topics/ai-hallucinations> accessed April 2026.
[4] Manupatra Academy, Law of Torts: Chapter 19 — Defamation (student.manupatra.com) <http://student.manupatra.com/Academic/Abk/Law-of-Torts/Chapter19.htm> accessed April 2026.
[5] The Bharatiya Nyaya Sanhita, No. 45 of 2023, § 356, INDIA CODE (2023).
[6] S Parvini, ‘Conservative Activist Robby Starbuck Sues Meta over AI Responses About Him’ (AP News, 1 May 2025) <https://apnews.com/article/robby-starbuck-meta-ai-delaware-eb587d274fdc18681c51108ade54b095> accessed April 2026. [Author to verify current status of proceedings.]
[7] The Information Technology Act, No. 21 of 2000, § 79, INDIA CODE (2000).
[8] Manupatra Academy, Law of Torts: Chapter 19 — Defamation (student.manupatra.com) <http://student.manupatra.com/Academic/Abk/Law-of-Torts/Chapter19.htm> accessed April 2026.
[9] S Panwar, ‘Editors Beware, Kerala High Court Warns Against Republishing “Derogatory” Posts’ The Indian Express (New Delhi, 10 March 2026). [Author to add case name and citation once judgment is reported.]



