Legal Regulations on Deepfakes: A Comparative Study of the US, UK, and Indian Perspectives

Published on: 19th December, 2025

Authored by: Ruplekha Kalita
University Law College, Gauhati University

Abstract

In the modern digital world, deepfakes—realistic but fabricated audio, video, or images produced by artificial intelligence—are becoming increasingly prevalent. While they can be employed creatively in film or education, they present significant ethical and legal challenges. Deepfakes pose serious threats to public trust and individual rights through their misuse in spreading disinformation, impersonating public figures, extortion, and privacy violations.

This research examines how three major legal systems—the United States, the United Kingdom, and India—are responding to the deepfake phenomenon. By comparing these approaches, this study highlights what is working, what is missing, and what can be improved. It argues that India urgently needs a more focused legal response to deepfakes, one that balances free speech with protection against harm. The paper concludes with practical recommendations for building a legal framework that keeps pace with technology while protecting both individuals and democratic values in the digital age.

Introduction

To combat deepfakes effectively, we must first understand what they are. Deepfakes are a form of synthetic media created using artificial intelligence; the term blends “deep learning” and “fake.”[1] The result is fabricated audio, images, or video that nonetheless appears remarkably realistic. As deepfake creation tools become more accessible, their impact is being felt globally. Deepfakes rely on machine learning algorithms, particularly Generative Adversarial Networks (GANs), to create or overlay content that realistically resembles real people. They can be employed benignly for satire or entertainment, but they can also be used maliciously to disseminate disinformation, steal identities, harass individuals, or manipulate elections. This dual-use nature creates significant legal complexities regarding intent, impact, and jurisdiction.

Understanding Threats Posed by Deepfakes

Deepfakes rely on neural networks that analyze large datasets to learn how to mimic a person’s facial expressions, mannerisms, voice, and inflections. The process involves feeding footage of two people into a deep learning algorithm to train it to swap faces. In other words, deepfakes use facial mapping technology and AI to swap one person’s face in a video with another person’s face.[2] When misused, deepfakes pose serious threats to privacy and information integrity by spreading misinformation, manipulating public opinion, and violating personal privacy.
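The adversarial training idea behind GANs can be illustrated with a deliberately tiny, hypothetical sketch (no real imagery or neural networks involved): a "generator" with a single numeric parameter tries to imitate a one-dimensional "real" data distribution, while a crude "discriminator" repeatedly draws a boundary between real and fake samples. All names and numbers here are illustrative; production deepfake systems use deep neural networks, not this toy arithmetic.

```python
import math
import random

random.seed(42)

REAL_MEAN = 5.0   # the "real data" distribution the generator must learn to imitate
BATCH = 256

def real_batch():
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(BATCH)]

def fake_batch(mu):
    return [random.gauss(mu, 1.0) for _ in range(BATCH)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

mu = 0.0          # generator's only parameter: the mean of its fake samples
for step in range(300):
    real = real_batch()
    fake = fake_batch(mu)

    # "Discriminator": a soft boundary halfway between the two batch means,
    # oriented so that samples on the real side score close to 1 ("looks real").
    r_mean = sum(real) / BATCH
    f_mean = sum(fake) / BATCH
    boundary = (r_mean + f_mean) / 2.0
    sign = 1.0 if r_mean >= f_mean else -1.0

    def D(x):
        return sigmoid(sign * (x - boundary))

    # "Generator": gradient ascent on mean(log D(fake)), i.e. shift mu so that
    # its fakes fool the discriminator.  d/dmu log D(x) = sign * (1 - D(x)).
    grad = sum(sign * (1.0 - D(x)) for x in fake) / BATCH
    mu += 0.2 * grad

print(round(mu, 2))  # ends up close to REAL_MEAN
```

The two-player dynamic is the point: each side's improvement forces the other to improve, which is why GAN-generated fakes become progressively harder to distinguish from authentic material.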

Case Studies on Global Legal Regulations

In this section, we evaluate how three countries—the United States, the United Kingdom, and India—are addressing the emerging challenges associated with deepfakes, and consider recommendations for tackling this threat in India.

1. United States

The case of X Corp. v. State of Minnesota (2025)[3] represents a crucial test for American deepfake regulations. In this ongoing case, the social media company X (formerly Twitter) challenged the constitutionality of Minnesota’s deepfake election law, which prohibits the distribution of false information about political candidates created by artificial intelligence within ninety days of an election. X Corp. contended that the statute violated the First Amendment by imposing content-based speech restrictions and that it was preempted by Section 230 of the Communications Decency Act, which shields online platforms from liability for user-generated content.

This case serves as a crucial constitutional test for American deepfake laws. Although Minnesota’s law aimed to prevent election-related disinformation and preserve democratic integrity, X Corp. argued that the statute was overly broad and vague, potentially stifling free speech, satire, and parody.

The lawsuit follows the Supreme Court’s decision in Moody v. NetChoice,[4] which held that platforms’ curation of user content is itself protected speech. That precedent strengthens X Corp.’s argument that Minnesota cannot compel or penalize its content moderation decisions, even in pursuit of election integrity.

2. United Kingdom

In the landmark case R v. Brandon Tyler,[5] heard at Chelmsford Crown Court, twenty-six-year-old Brandon Tyler of Essex received a five-year prison sentence for creating and distributing sexually explicit deepfake images of women he knew. Without their consent, Tyler used artificial intelligence to overlay his victims’ faces onto pornographic content—a troubling form of technological abuse that caused severe emotional harm.

This case is noteworthy not only for the offense but also for the sentence imposed. The court made a landmark decision in UK criminal history by prohibiting Tyler from using generative AI tools in the future, illustrating the judiciary’s growing recognition that advanced technologies can be weaponized for abuse.

At the time of Tyler’s offense, the UK had not yet enacted criminal legislation specifically targeting deepfakes. Nevertheless, prosecutors successfully filed charges under the Protection from Harassment Act 1997 and Section 33 of the Criminal Justice and Courts Act 2015 (which addresses revenge pornography). These laws were creatively applied to address the serious harms caused by AI-manipulated content, despite having been originally designed for more conventional forms of abuse.

Tyler was convicted during a period of evolving legal reform. The Online Safety Act 2023 had recently been passed, and the proposed Criminal Justice Bill sought to criminalize the creation and sharing of AI-generated explicit images, even if they are never distributed. His case therefore sits at the intersection of existing laws being adapted to address new challenges and new legislation specifically targeting these issues.

Overall, this ruling highlights how the legal system is responding to the practical consequences of synthetic media. As digital manipulation tools become more widely available, the courts are establishing foundations to ensure these tools are difficult to abuse in the future.

3. India

In Gaurav Bhatia v. Naveen Kumar YouTube Channel,[6] the Delhi High Court addressed the reputational harm caused by deepfake content targeting a Senior Advocate and BJP Spokesperson. The plaintiff sought an injunction against several social media users and platforms for circulating manipulated videos falsely depicting him being assaulted in court.

The Court determined that these deepfakes were patently false and could seriously harm the plaintiff’s reputation and dignity. By balancing the freedom of speech under Article 19(1)(a) against the right to reputation under Article 21, the Court concluded that false and defamatory deepfake content is not protected by the Constitution.

The Court issued a landmark order requiring the videos and posts to be removed or restricted, establishing a crucial precedent for combating synthetic media abuse in India. Despite the absence of laws specifically targeting deepfakes, the ruling demonstrates how Indian courts are beginning to develop legal protections against the misuse of AI-generated content.

How the US, UK, and India Are Handling Deepfakes Differently

While deepfakes present a global challenge, countries are responding in markedly different ways. In this section, we examine how the United States, the United Kingdom, and India are tackling the legal challenges created by this rapidly evolving technology.

1. United States: Caught Between Free Speech and Accountability

In the United States, freedom of speech holds uniquely strong constitutional status, which complicates efforts to regulate deepfakes. States like California, Texas, and Minnesota have passed laws addressing issues such as fabricated political videos and non-consensual pornography. However, these laws often face opposition due to First Amendment protections for even harmful or controversial speech, as illustrated in the recent case of X Corp. v. State of Minnesota (2025). This case demonstrates the difficulty of creating deepfake laws in the US that do not conflict with robust speech protections.

2. United Kingdom: Using Existing Laws Creatively While Building New Ones

The UK has adopted a more flexible approach. Even before enacting specific deepfake legislation, its courts found ways to punish harmful uses of AI-generated content by creatively applying existing laws, as seen in R v. Brandon Tyler. In this case, the court sentenced a man to five years in prison for creating and sharing sexual deepfakes of women he knew. The court also prohibited him from using generative AI in the future—an unprecedented decision demonstrating the seriousness with which the judiciary views this issue.

Although the UK did not yet have dedicated deepfake laws, prosecutors used legislation addressing harassment and revenge pornography to secure justice. Now, the UK is introducing stronger tools: the Online Safety Act 2023 and the proposed Criminal Justice Bill are set to directly criminalize not only the sharing but also the creation of harmful deepfake images. This combination of adapting existing rules while building new ones gives the UK a strong advantage in addressing the problem.

3. India: Active Judiciary, Silent Legislature

India, with its massive population of internet users and growing digital media consumption, is particularly vulnerable to deepfake misuse. Yet, unlike the US or UK, India lacks specific legislation addressing deepfakes.

Nevertheless, Indian courts have begun taking action. As demonstrated in Gaurav Bhatia v. Naveen Kumar YouTube Channel, the Delhi High Court addressed a deepfake falsely depicting a senior lawyer and political spokesperson being assaulted. The Court swiftly recognized the threat and ordered removal of the videos, stating that the plaintiff’s reputation and dignity were violated and that such fabricated content is not protected by free speech laws.

This was a significant development because it showed that Indian judges understand the dangers and are prepared to intervene. However, it also revealed how heavily India relies on judicial action rather than legislative frameworks to address AI’s harmful effects. In the absence of clear legislation, victims often experience confusion and delays when seeking legal remedies.

Key Challenges and Recommendations for India’s Legal Vacuum

India’s large digital user base, intense political polarization, and lack of explicit legal protections make it particularly susceptible to deepfake abuse. Individual rights and democratic stability are at risk due to the intersection of deepfakes with electoral manipulation and gender-based violence.

Key Challenges

India faces several critical obstacles in addressing deepfake technology:

1. There is no statutory definition of deepfakes or synthetic media, leaving courts without clear legal guidance.
2. No criminal provision specifically targets the creation or circulation of manipulated media, forcing reliance on general laws that may not adequately address the unique harms involved.
3. Limited digital literacy and awareness among law enforcement hamper effective investigation and prosecution.
4. Inconsistent content regulation between intermediaries and social platforms creates enforcement gaps.
5. The judiciary can no longer be the sole institution responsible for balancing India’s dual constitutional commitments to freedom of speech under Article 19(1)(a) and protection of dignity under Article 21.

Recommendations

To address this legal vacuum and align with international best practices, several measures are urgently needed:

1. Enact a Deepfake Prevention Act: a standalone statute clearly defining deepfakes, classifying types of harm such as defamation, non-consensual pornography, and election interference, and prescribing graded penalties.
2. Mandate watermarking and labeling, so that AI-generated content carries verifiable digital signatures or disclaimers, particularly in commercial or political contexts.
3. Establish platform accountability by legally requiring intermediaries to identify, block, and report harmful synthetic media, with penalties for failure to comply.
4. Provide investigative tools and judicial training, giving police and courts access to forensic tools and the AI literacy needed to evaluate deepfake evidence effectively.
5. Run digital literacy campaigns at the grassroots level to help the public detect and report deepfakes.
6. Pursue cross-border cooperation through adoption of international standards and collaboration with global digital regulators to formulate effective reforms and limit misuse.
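To make the watermarking recommendation concrete, the following sketch shows one minimal way "verifiable digital signatures" on media content could work, using a keyed hash from Python's standard library. The key name and management model are hypothetical; a real provenance scheme (e.g., along the lines of the C2PA standard) would use asymmetric signatures and certified publisher identities rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical publisher key -- in practice this would be an asymmetric
# key pair managed by a media outlet, platform, or AI-tool vendor.
SECRET_KEY = b"publisher-demo-key"

def sign_media(content: bytes) -> str:
    """Produce a verifiable tag over a media file's bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, tag: str) -> bool:
    """Check that the content has not been altered since it was signed."""
    return hmac.compare_digest(sign_media(content), tag)

original = b"...original video frame bytes..."
tag = sign_media(original)

print(verify_media(original, tag))                # True: file is untouched
print(verify_media(original + b"edited", tag))    # False: content was modified
```

The legal value of such a mechanism is evidentiary: a court or platform can verify mechanically whether a piece of media matches what a publisher actually signed, rather than relying solely on visual inspection.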

Conclusion

As artificial intelligence continues to transform our digital landscape, deepfakes have emerged as one of the most concerning instruments for deception and harm. This comparative analysis highlights the responses of the United States, United Kingdom, and India, each reflecting the opportunities and challenges specific to their legal and constitutional frameworks. The US struggles with tensions between accountability and free speech protections, while the UK has adopted a more flexible and reform-focused approach. India continues to operate in a legislative vacuum, leaving its citizens more vulnerable to harmful uses of synthetic media despite progressive judicial interventions.

However, legal reform alone is insufficient. The fight against deepfakes must also leverage technological solutions. Recent studies emphasize that deepfake detection is a rapidly evolving field in cybersecurity and AI, and that distinguishing fake content from authentic material is becoming nearly impossible without technological intervention. As one study notes, “Deepfake technology is progressing at an increasing pace… hence, our results list numerous cues for detecting deepfakes, and suggest harnessing AI in order to detect AI-generated fakes as an efficient combat strategy.”[7] The integration of blockchain technology, digital watermarks, and AI-based detection tools presents promising future directions by creating transparent, traceable, and immutable records of media content. Incorporating such technologies into a future Deepfake Prevention Act would strengthen evidentiary standards, fortify enforcement mechanisms, and promote cooperation between the legal system, technology sector, and civil society in India.
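The "transparent, traceable, and immutable records" mentioned above can be pictured as an append-only hash chain, the core idea behind blockchain-based provenance. The sketch below is a hypothetical minimal example (all class and field names are invented for illustration), not a production system: each ledger entry commits to the hash of the previous entry, so silently rewriting any past record breaks every later link.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Canonical SHA-256 digest of a ledger record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    """Append-only hash chain of media digests (illustrative only)."""

    def __init__(self):
        self.chain = [{"prev": "0" * 64, "media_digest": "genesis"}]

    def append(self, media_bytes: bytes) -> None:
        self.chain.append({
            "prev": record_hash(self.chain[-1]),          # commit to prior entry
            "media_digest": hashlib.sha256(media_bytes).hexdigest(),
        })

    def verify(self) -> bool:
        """True only if no past record has been altered."""
        return all(
            self.chain[i]["prev"] == record_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = ProvenanceLedger()
ledger.append(b"frame-0001")
ledger.append(b"frame-0002")
print(ledger.verify())                        # True: chain is intact

ledger.chain[1]["media_digest"] = "forged"    # tamper with an old record
print(ledger.verify())                        # False: tampering is detectable
```

Paired with AI-based detection and watermarking, a record of this kind could give a future Deepfake Prevention Act the tamper-evident audit trail its enforcement mechanisms would need.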

References

[1] Mika Westerlund, The Emergence of Deepfake Technology: A Review, 9 TECH. INNOVATION MGMT. REV. 39 (2019).
[2] Id.
[3] X Corp. v. Ellison, No. 0:25-cv-01649 (D. Minn. 2025).
[4] Moody v. NetChoice, LLC, No. 22-277 (U.S. 2024).
[5] R v. Brandon Tyler (Chelmsford Crown Ct. 2024).
[6] Gaurav Bhatia v. Naveen Kumar YouTube Channel, CS(OS) 274/2024 (Del. H.C. Apr. 16, 2024).
[7] Westerlund, supra note 1.
