Published On: August 21st 2025
Authored By: Dutta Chandra Varshini
Vellore Institute Of Technology, Andhra Pradesh
ABSTRACT
The rapid proliferation of deepfake technology, propelled by artificial intelligence, has posed a significant challenge to legal frameworks globally. In India, the absence of a specific legal architecture to counter the misuse of synthetic media leaves individuals vulnerable. Deepfakes have already been implicated in serious incidents involving non-consensual pornography, online fraud, political manipulation, and identity theft, some even attracting the attention of high-level regulatory bodies like the Reserve Bank of India.[1]
This paper critically examines the existing legal response to deepfakes under Indian law, especially provisions in the Information Technology Act, 2000, and the Bharatiya Nyaya Sanhita (BNS). It highlights the inadequacy of current statutes to address the unique threat posed by synthetic media. The study also explores ethical concerns tied to privacy and dignity, compares global legislative trends, and argues for the creation of a dedicated legal framework with robust takedown mechanisms and punitive provisions.
INTRODUCTION: WHAT ARE DEEPFAKES?
Deepfakes refer to images, videos, or audio recordings that have been artificially generated or manipulated using AI to realistically depict individuals doing or saying things they never did. This manipulation is increasingly being used for malicious purposes, including financial scams, identity theft, and political misinformation.
India has already witnessed a surge in such incidents, including one where a deepfake video falsely portraying an RBI official went viral.[2] The major concern lies in the fact that once such content is released into the public domain, detection usually comes too late to prevent social or reputational harm. Furthermore, the current lack of a legal framework tailored to this issue has amplified the risks.
ETHICAL AND CONSTITUTIONAL CONCERNS
One of the central ethical issues surrounding deepfakes is the violation of an individual’s right to privacy, as recognized in K.S. Puttaswamy v. Union of India [(2017) 10 SCC 1][3], where the Supreme Court held that privacy is intrinsic to Article 21 of the Constitution. Deepfakes frequently involve the unauthorized use of a person’s likeness, voice, or image, infringing upon their control over personal data.
Further, deepfake-based explicit content seriously compromises a person’s right to dignity, also protected under Article 21. These violations are not just personal but societal, reinforcing gender-based harm and undermining trust in digital information.
In the realm of public discourse, deepfakes have been weaponized to spread disinformation. A notable case involved a deepfake video falsely claiming RBI endorsement of a fraudulent investment scheme. Such fabrications have also been used to malign political figures and manipulate public opinion, destabilizing the very foundations of democracy and free speech.
EVIDENCE OF MISUSE
Several incidents highlight the dangers of unregulated deepfake technology. In 2023, the Delhi Commission for Women investigated a case where explicit deepfake images of a woman were circulated online without consent.[4] In another instance, deepfake videos promoting illegal gambling apps misled both the public and investors.
These episodes led to legal petitions, including one before the Delhi High Court, which prompted the Ministry of Electronics and Information Technology (MeitY) to form a committee to assess and recommend measures on the issue.[5] These cases reflect the broader threat synthetic media poses to law and order and the urgent need for regulatory intervention.
COMPARATIVE LEGAL PERSPECTIVES
Many jurisdictions have already taken proactive steps to tackle deepfake misuse:
United States: The Deepfakes Accountability Act proposes mandatory disclosures for synthetic media and penalizes malicious intent behind its creation and distribution.[6] Some states, like California and Texas, have criminalized deepfakes used for election interference or revenge porn.
European Union: The Digital Services Act and the upcoming Artificial Intelligence Act[7] include provisions for labeling AI-generated content and for holding online platforms accountable for the spread of harmful deepfakes.
China: In 2023, China enacted regulations requiring explicit consent and watermarking for any AI-generated synthetic media content[8]. Non-compliance invites heavy fines or bans.
In contrast, India lacks any comparable national legislation, placing its citizens at a higher risk of synthetic content abuse.
CASE LAW ANALYSIS
- K.S. Puttaswamy v. Union of India [(2017) 10 SCC 1][9]: This landmark judgment reaffirmed the right to privacy as a fundamental right under Article 21. In the context of deepfakes, it supports the argument that non-consensual creation or distribution of manipulated media violates a citizen’s autonomy and dignity.
- TV Today Network Ltd. v. Google LLC & Ors., 2022 SCC OnLine Del 3180 (Del HC)[10]: The case involved impersonation using synthetic content. The Delhi High Court acknowledged intermediary liability in cases involving deepfakes. It emphasized that platform hosts have an obligation to act once notified.
- Chaitanya Rohilla v. Union of India, 2020 SCC OnLine SC 719[11]: The petitioner urged legal reform due to the increasing misuse of deepfakes in spreading misinformation. The court prompted MeitY to constitute a committee to explore legislative remedies.
These judgments form the jurisprudential foundation for recognizing and regulating deepfake crimes in India.
CHALLENGES IN LAW ENFORCEMENT
Lack of awareness: Most investigating officers are unfamiliar with the technical workings of synthetic media.
Absence of digital forensics infrastructure: India lacks widespread access to advanced forensic tools that can detect deepfake alteration.
Delay in platform response: Even when content is reported, social media intermediaries often delay takedown or verification due to inadequate legal pressure.
Jurisdictional barriers: Deepfake content often originates or is hosted overseas, complicating enforcement through mutual legal assistance treaties (MLATs).
Unless enforcement agencies are properly trained and equipped, even strong laws will be difficult to implement effectively.
INTERMEDIARY LIABILITY AND PLATFORM RESPONSIBILITY
Under India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021,[12] intermediaries are required to act on flagged content within 36 hours. However, these rules are limited in scope:
Platforms are not required to detect deepfakes proactively.
There is no mandate to watermark or label AI-generated content.
Algorithmic transparency remains minimal.
Major social media and content platforms must be made accountable for both detection and removal, especially given that they already have the capability to auto-flag and remove other content types such as copyright violations and hate speech.
POLICY RECOMMENDATIONS AND THE WAY FORWARD
A multipronged approach is essential to combat deepfakes in India:
1. Legal Reform
Enact a comprehensive Deepfake Regulation Act that:
Defines synthetic media and deepfakes clearly.
Establishes punishments based on intent and harm.
Includes civil remedies for victims, including compensation and takedown rights.
2. Institutional Mechanism
Mandate watermarking and labeling for all AI-generated content.
Introduce algorithmic accountability to ensure responsible AI deployment.
Penalize repeated negligence in taking down flagged content.
3. Capacity Building
Train police and judicial officers in detecting, analyzing, and prosecuting deepfake crimes.
Allocate budgets for deepfake detection tools and digital forensics labs across Indian states.
4. Public Awareness
Launch digital literacy programs targeting students, journalists, and content creators.
Promote responsible sharing practices and encourage reporting of synthetic media misuse.
CONCLUSION
Deepfakes represent one of the most sophisticated and dangerous tools in the modern digital threat landscape. In the Indian context, where legal evolution often trails technological development, the unchecked use of such synthetic content endangers civil liberties, erodes public trust, and compromises institutional credibility.
While India has initiated preliminary efforts, such as forming committees and applying general IT law provisions, these measures fall short of offering a robust response. The need of the hour is a dedicated legislative framework, institutional preparedness, and a digitally literate citizenry. Without these reforms, the victims of deepfakes will continue to carry the burden of harm, while perpetrators operate freely in a virtual wild west.
REFERENCES
[1] Karthik Madhu, "Deepfakes and Indian Law: Bridging the Legal Gaps in the Era of Synthetic Media", Presidency University.
[2] Reserve Bank of India, "Press Release on Deepfake Investment Scam Clarification", 2023.
[3] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
[4] Delhi Commission for Women, "Notice Regarding Deepfake Pornography Cases", 2023.
[5] Bar and Bench, "Delhi HC Hears Plea on Deepfakes; MeitY Forms Committee", 2023.
[6] U.S. Congress, Deepfakes Accountability Act, H.R. 3230, 2019.
[7] European Commission, Digital Services Act and Artificial Intelligence Act Proposals, 2022.
[8] Cyberspace Administration of China, Provisions on the Administration of Deep Synthesis Internet Information Services, 2023.
[9] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
[10] TV Today Network Ltd. v. Google LLC & Ors., 2022 SCC OnLine Del 3180 (Del HC).
[11] Chaitanya Rohilla v. Union of India, 2020 SCC OnLine SC 719.
[12] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, Government of India.