Published on: 23rd December 2025
Authored by: Snigdho Dhar
University of Engineering & Management
Abstract
Deepfakes, powered by artificial intelligence, have emerged as a significant threat to privacy, democracy, and cybersecurity, and a growing concern for legal systems worldwide. While India has laws addressing cybercrime and defamation, it has no specific regulation focused on synthetic media manipulation. This paper explores the current legal framework in India, highlights critical gaps in existing legislation, and examines how other countries are tackling the deepfake challenge. Finally, it suggests comprehensive policy measures to create effective laws for deepfake regulation in India.
Keywords: Deepfake laws, AI regulations, Cybercrime, Privacy, Indian law, Digital Ethics, Synthetic media
I. Introduction
What Are Deepfakes?
Deepfakes are AI-generated videos, images, or audio recordings that manipulate reality, making it appear as though someone is saying or doing something they never actually did. These synthetic media files are created using advanced machine learning techniques, particularly Generative Adversarial Networks (GANs), which enable remarkably realistic forgeries that are increasingly difficult to detect with the naked eye.
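For readers unfamiliar with the mechanics, the sketch below illustrates the adversarial training loop at the heart of a GAN. It is a minimal, illustrative toy: the "real" data is a one-dimensional Gaussian rather than video frames, and the network sizes and hyperparameters are arbitrary choices for the demonstration, not drawn from any production deepfake system.

```python
# Minimal GAN training loop (PyTorch). The generator learns to forge
# samples from a target distribution; the discriminator learns to tell
# real from fake. Deepfake systems apply this same dynamic to faces
# and voices at vastly larger scale.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the "true" distribution
    fake = generator(torch.randn(64, 8))     # forgeries produced from noise

    # Discriminator step: learn to separate real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# As training progresses, forged samples approach the real mean of 3.0.
print(generator(torch.randn(1000, 8)).mean().item())
```

The same dynamic, a generator improving until the discriminator can no longer distinguish its output from authentic data, is what makes mature deepfakes so difficult to detect by inspection.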
India has witnessed a troubling increase in deepfake-related crimes, including political propaganda, fake news dissemination, and identity theft. High-profile incidents—such as the use of AI-generated voices to spread misinformation during elections and the creation of fabricated celebrity videos—have raised serious concerns about the ethical and legal challenges posed by this technology. These incidents underscore the urgent need for a robust regulatory framework.
Legal Gaps and Challenges
Currently, India lacks a dedicated deepfake law. While existing provisions under the Information Technology Act, 2000[1] and the Indian Penal Code, 1860[2] cover some aspects of digital crime, they do not adequately address the unique challenges posed by AI-generated synthetic media. Section 66D of the IT Act (identity fraud), Section 67 (obscene content), and Section 500 of the IPC (defamation) provide partial coverage but remain insufficient to combat the sophisticated nature of deepfake technology.
II. Do India’s Existing Laws Adequately Regulate Deepfakes?
India does not have a dedicated law specifically targeting deepfakes. However, several provisions under existing legislation can be applied to deepfake-related offenses, though with significant limitations.
1. Information Technology Act, 2000
Section 66D – Punishment for Cheating by Personation Using Computer Resource: This provision punishes identity fraud and online impersonation, including through AI-generated deepfakes. The punishment includes up to three years of imprisonment and a fine of up to ₹1 lakh. However, this section was drafted before the emergence of sophisticated deepfake technology and does not specifically address the nuances of AI-generated impersonation.
Section 67 – Publishing or Transmitting Obscene Material in Electronic Form: This section criminalizes the transmission of sexually explicit or obscene deepfake videos. A first conviction carries up to three years' imprisonment and a fine of up to ₹5 lakh, rising to five years and ₹10 lakh for subsequent convictions. While applicable to obscene deepfakes, this provision does not cover non-sexual but equally harmful deepfake content used for defamation or political manipulation.
Section 69A – Power to Issue Directions for Blocking Public Access to Information: This provision allows the government to block deepfake content that threatens national security, sovereignty, or public order. However, the broad discretionary power raises concerns about potential censorship and lacks specific guidelines for deepfake identification and removal.
Section 72 – Breach of Confidentiality and Privacy: If a deepfake violates a person’s privacy by sharing manipulated private images or videos, this section may apply. The punishment is imprisonment of up to two years, a fine of up to ₹1 lakh, or both. However, the provision is addressed primarily to persons who obtained the material while exercising powers under the Act, predates modern privacy concerns related to synthetic media, and lacks specificity regarding AI-generated content.
2. Indian Penal Code, 1860
Sections 499 & 500 – Defamation: If a deepfake is created to harm a person’s reputation, it falls under defamation laws. The punishment includes up to two years of imprisonment, a fine, or both. However, proving defamation in deepfake cases presents unique challenges, particularly in establishing the creator’s identity and intent.
Sections 354A & 354D – Sexual Harassment and Cyber Stalking: These provisions apply when deepfakes are used to harass individuals, especially women, such as morphing images into obscene content. Section 354A carries a punishment of up to three years imprisonment, while Section 354D provides for up to five years. These sections, however, were not designed with AI-generated synthetic media in mind and may not adequately address the scale and sophistication of modern deepfake harassment.
Section 420 – Cheating and Fraud: If a deepfake is used for financial fraud, identity theft, or scams, this section applies. The punishment includes up to seven years of imprisonment and a fine. While broadly applicable, this provision lacks specific guidance on handling cases where AI technology is used to perpetrate fraud.
3. Other Relevant Laws
Indecent Representation of Women (Prohibition) Act, 1986: This Act prohibits the indecent representation of women in publications, advertisements, and similar material, which can extend to morphed images or videos.[3] However, it does not specifically address AI-generated content or provide adequate penalties for sophisticated deepfake creation.
Representation of the People Act, 1951: If deepfakes are used to spread false political propaganda during elections, they can be challenged under this law.[4] However, the act lacks specific provisions for identifying and prosecuting deepfake-related electoral manipulation.
Digital Personal Data Protection Act, 2023: This recently enacted legislation protects against unauthorized use of personal data in AI-generated deepfakes.[5] While a positive development, its effectiveness in addressing deepfake-specific violations remains to be tested through implementation and judicial interpretation.
III. Legal Gaps and the Need for New Deepfake Laws
The analysis of existing legislation reveals several critical gaps:
1. Absence of Specific Deepfake Definition: No Indian law currently defines or directly regulates deepfakes as a distinct category of offense. This lack of definitional clarity creates ambiguity in prosecution and enforcement.
2. Inadequate Penalties: Current punishments may not be sufficiently stringent to deter large-scale AI-based crimes, particularly those involving sophisticated technology and organized criminal networks.
3. No Deepfake Detection Framework: India lacks a comprehensive policy on AI regulation and a national system for deepfake detection, verification, and rapid response to tackle threats systematically (an illustrative forensic sketch follows this list).
4. Platform Accountability Vacuum: Existing intermediary liability provisions under Section 79 of the IT Act do not adequately address the unique challenges of deepfake content moderation and removal.
5. Limited Technical Expertise: Law enforcement agencies, prosecutors, and the judiciary often lack specialized training in identifying and prosecuting deepfake-related crimes.
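To make the detection gap concrete, the sketch below shows error-level analysis (ELA), a classical image-forensics heuristic that recompresses an image and measures where it differs from the original; spliced or synthesized regions often carry a different compression history and stand out in the difference map. This is an illustrative toy, not a proposal for the national framework: modern GAN-generated content routinely defeats such hand-crafted heuristics, which is precisely why item 3 above calls for dedicated, continually updated detection infrastructure. The file path is a placeholder.

```python
# Error-level analysis (ELA): recompress the image once and diff it
# against the original. Bright regions in the difference map are
# candidates for manipulation. Requires Pillow (pip install Pillow).
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)   # recompress once
    buffer.seek(0)
    resaved = Image.open(buffer)
    return ImageChops.difference(original, resaved)  # bright = anomalous

diff = error_level_analysis("suspect.jpg")  # "suspect.jpg" is a placeholder
print("per-channel (min, max) error levels:", diff.getextrema())
```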
IV. Literature Review and Scholarly Perspectives
Deepfakes, as AI-generated synthetic media, have sparked extensive scholarly research and legal debates due to their potential to deceive and manipulate public discourse. This section highlights key studies and perspectives from notable researchers and institutions.
International Research Perspectives
Benjamin Sobel: Sobel, a lawyer and scholar of information law, conducted a comprehensive survey of enacted anti-deepfake statutes, analyzing their effectiveness and limitations.[6] His theoretical framework elucidates how deepfakes differ from traditional forms of deception, emphasizing the need for nuanced legal approaches that account for the technology’s unique characteristics.
The Deepfake Defense Study: A study titled “The Deepfake Defense—Exploring the Limits of the Law and Ethical Rules” delved into the ethical boundaries and procedural rules concerning deepfakes.[7] It examined the concept of the “deepfake defense” in criminal and civil proceedings and the ethical dilemmas lawyers might encounter when dealing with deepfake evidence, including questions of authenticity verification and expert testimony requirements.
American Civil Liberties Union (ACLU) Position: The ACLU has actively defended the constitutional right to create and share deepfakes, opposing legislation that may infringe upon First Amendment rights.[8] The organization argues that while deepfakes can be harmful, overly broad laws could suppress protected speech, including satire, parody, and political commentary. This position highlights the tension between protecting against malicious use and preserving free expression.
Indian Legal Framework and Judicial Response
India lacks specific legislation addressing deepfakes, and the term “deepfakes” is not defined under any Indian statute. Existing laws offer only indirect remedies. The Information Technology Act, 2000, through Sections 66E and 67, penalizes privacy violations and the electronic transmission of obscene material. The Indian Penal Code, 1860, through Sections 499 and 500, addresses defamation, which could be invoked in cases where deepfakes harm an individual’s reputation.
Despite these provisions, the absence of explicit laws targeting deepfakes creates significant enforcement challenges. The Indian judiciary has begun to address misuse in specific cases. For instance, actor Anil Kapoor sought legal protection against unauthorized use of his persona in deepfake content, highlighting the growing awareness of personality rights in the digital age.[9]
Identified Research Gaps
The under-researched nature of deepfakes in legal contexts can be attributed to several factors:
Rapid Technological Advancement: The swift evolution of deepfake technology outpaces legislative processes, leading to outdated or non-existent laws. By the time regulations are drafted, the technology has often advanced beyond the scope of proposed legislation.
Complexity in Detection: The increasing sophistication of deepfakes makes detection challenging, complicating legal enforcement and scholarly analysis. Traditional forensic methods often prove inadequate against advanced AI-generated content.
Balancing Regulation and Free Speech: Crafting laws that address the malicious use of deepfakes without infringing on free expression is inherently complex, leading to cautious legislative approaches. Policymakers struggle to define clear boundaries between harmful deepfakes and protected speech.
Limited Jurisprudence: The novelty of deepfakes means few cases have reached higher courts, resulting in a lack of judicial guidance and limited academic discourse on practical application of existing laws to deepfake scenarios.
Addressing these gaps requires comprehensive research into the legal implications of deepfakes, development of reliable detection technologies, and the formulation of balanced regulations that protect individuals without stifling innovation or legitimate expression.
V. International Legal Frameworks
Examining how other jurisdictions address deepfakes provides valuable insights for developing India’s regulatory approach.
1. European Union
Artificial Intelligence Act (AI Act): The EU has been proactive in regulating AI technologies, including deepfakes. Rather than treating deepfakes as high-risk systems outright, the AI Act imposes targeted transparency obligations on them: deployers must disclose that content has been artificially generated or manipulated, and providers of generative systems must ensure that synthetic outputs are marked in a machine-readable way.[10] These duties bind both developers and deployers of such systems.
Digital Services Act (DSA): The DSA governs the intermediaries and platforms that host and moderate deepfake content.[11] Very large online platforms must assess and mitigate systemic risks, including by ensuring that artificially generated or manipulated images, audio, and video that falsely appear authentic are distinguishable through prominent markings. Platforms face significant penalties for non-compliance.
2. United States
As of now, there is no comprehensive federal legislation specifically addressing deepfakes in the United States; regulation has primarily occurred at the state level, with varying approaches. However, existing federal laws against child sexual abuse material (CSAM) apply with full force to deepfakes depicting minors: using facial manipulation or AI generation to create or distribute such material is prosecuted under the federal CSAM statutes.[12] Several states, including California, Texas, and Virginia, have enacted specific anti-deepfake legislation targeting non-consensual pornography and electoral manipulation.
3. Canada
While Canada lacks specific deepfake legislation, existing laws offer some protections. The Criminal Code contains provisions against harassment, defamatory libel, and voyeurism that can be applied to harmful deepfake activities.[13] Privacy laws may address unauthorized use of an individual’s image. However, legal experts advocate for more explicit regulations to address the unique challenges posed by deepfakes, including the difficulty of identifying perpetrators and the cross-border nature of digital content.
4. Global Perspectives and Emerging Approaches
Internationally, approaches to deepfake regulation vary significantly:
Transparency Requirements: Some jurisdictions mandate that AI-generated content be clearly labeled to inform viewers of its artificial nature. This approach prioritizes consumer awareness while preserving creative freedom (a minimal provenance-labeling sketch follows this list).
Criminalization of Harmful Uses: Certain countries have criminalized the creation and distribution of harmful deepfakes, particularly those involving non-consensual explicit content or political manipulation during elections. These laws typically include enhanced penalties when deepfakes target vulnerable individuals or threaten democratic processes.
Platform Accountability: Many jurisdictions are moving toward stricter intermediary liability rules, requiring social media platforms and content hosts to implement deepfake detection systems and respond promptly to takedown requests.
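As a concrete illustration of the transparency-labeling approach noted above, the sketch below attaches a signed provenance manifest to a media file so that a platform can later verify whether the file was declared AI-generated. It is a toy under simplifying assumptions: real deployments use open standards such as C2PA with certificate-based signatures, whereas this example uses a shared HMAC key and an invented manifest format purely for brevity.

```python
# Toy provenance manifest: a signed claim binding an "ai_generated" flag
# to the SHA-256 hash of the exact media bytes. Uses only the standard
# library; the key and manifest format are invented for illustration.
import hashlib, hmac, json, time

SECRET_KEY = b"demo-key"  # hypothetical; real signers use asymmetric keys

def make_manifest(media_bytes: bytes, ai_generated: bool) -> str:
    claim = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_generated": ai_generated,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return json.dumps(claim)

def verify_manifest(media_bytes: bytes, manifest_json: str) -> bool:
    claim = json.loads(manifest_json)
    signature = claim.pop("signature")
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claim["sha256"] == hashlib.sha256(media_bytes).hexdigest())

manifest = make_manifest(b"<video bytes>", ai_generated=True)
assert verify_manifest(b"<video bytes>", manifest)   # label intact
assert not verify_manifest(b"<edited bytes>", manifest)  # tampering detected
```

The design point is that the label travels with a cryptographic commitment to the exact bytes of the file, so silently re-editing the media invalidates the declared provenance.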
VI. Should We Introduce Criminal Penalties for Deepfake Misuse?
The introduction of criminal penalties for malicious deepfake usage is essential to deter wrongful acts and ensure accountability. The law should differentiate between intentional misuse and negligent misuse to determine appropriate penalties and avoid criminalizing legitimate uses such as satire, education, or artistic expression.
Proposed Criminal Penalties for Deepfake Misuse
1. Non-Consensual Deepfake Pornography
Punishment: Five to ten years imprisonment plus fine (₹5 lakh to ₹10 lakh)
Rationale: Non-consensual explicit content represents one of the most harmful uses of deepfakes, severely violating privacy, dignity, and causing lasting psychological trauma to victims. The punishment should reflect the gravity of this violation and include provisions for victim compensation.
2. Political Manipulation and Electoral Misinformation
Punishment: Three to seven years imprisonment plus fine (₹3 lakh to ₹7 lakh)
Rationale: Deepfake videos influencing elections or damaging political reputations can disrupt democracy, undermine public trust in institutions, and threaten national security. Enhanced penalties should apply during election periods or when targeting constitutional functionaries.
3. Fraud and Identity Theft
Punishment: Two to five years imprisonment plus fine (₹2 lakh to ₹5 lakh)
Rationale: Criminals can impersonate individuals using deepfake voice and video technology to steal financial information, obtain unauthorized access to accounts, or commit various forms of fraud. Penalties should be enhanced when targeting senior citizens or other vulnerable populations.
4. Defamation and Character Assassination
Punishment: One to three years imprisonment plus fine (₹1 lakh to ₹3 lakh)
Rationale: Deepfakes used to spread false allegations can severely damage an individual’s reputation, livelihood, and mental health. The law should provide for both criminal penalties and civil remedies, including damages for reputational harm.
5. Unauthorized Use of Celebrities and Public Figures
Punishment: Civil penalties (compensation for damages) plus fines (₹5 lakh to ₹15 lakh)
Rationale: Public figures often fall victim to deepfake advertisements, misleading endorsements, or unauthorized AI-generated appearances that exploit their image for commercial gain. This violates personality rights and can cause substantial economic harm.
Why Criminal Penalties Are Necessary
Criminal penalties serve multiple essential purposes. First, they provide victims with a clear legal pathway for justice, moving beyond the limitations of civil remedies alone. Second, they create accountability for platforms hosting deepfake content, incentivizing investment in detection and removal systems. Third, they establish a strong deterrent effect against malicious actors who might otherwise exploit the relative anonymity of digital platforms.
VII. Role of Courts, Policymakers, and Tech Companies
Regulating deepfakes requires coordinated collaboration between the judiciary, legislators, and technology companies to ensure an effective, balanced legal framework.
Strict Liability for Social Media Platforms
Currently, social media platforms operate as intermediaries under Section 79 of the IT Act and enjoy conditional safe-harbour protection for user-generated content. India should tighten this regime, making platforms directly accountable for deepfake-related harm when they fail to act with reasonable diligence.
Proposed Social Media Regulations:
Rapid Response Requirement: Platforms must detect and remove harmful deepfake content within 24 hours of receiving notice or automated detection, with shorter timeframes for severe violations such as non-consensual pornography.
Financial Penalties: Heavy fines (₹50 lakh to ₹1 crore) for allowing deepfake misinformation to spread, with escalating penalties for repeat violations and systemic failures in content moderation.
Identity Verification: Mandatory verification before users can create or upload certain categories of synthetic media, balanced with privacy protections and anonymous speech considerations.
Transparency Reporting: Platforms must publish regular reports on deepfake detection rates, takedown requests, and enforcement actions to enable public accountability.
Why Platform Accountability Is Important
Social media platforms serve as the primary distribution channels for deepfakes. Fake political videos can manipulate elections and distort public perception of candidates and policies. Deepfake scams cost individuals and businesses millions in fraud annually, with financial losses escalating. Holding platforms accountable will encourage responsible AI deployment, investment in detection technology, and proactive content moderation rather than reactive responses.
VIII. Case Law Analysis: Deepfakes in Indian Courts
Swami Ramdev v. Facebook, Google & YouTube (2019)
Bench: Justice Prathiba M. Singh, Delhi High Court
Legal Provisions Involved: Information Technology Act, 2000 (Section 79—Intermediary Liability); Defamation laws under IPC (Sections 499 and 500); Fundamental Rights (Article 19—Freedom of Speech and Expression versus Right to Reputation)[14]
Facts of the Case: Swami Ramdev, a prominent yoga guru and public figure, filed a case against social media giants Facebook, Google, YouTube, and Twitter (now X). He sought the removal of defamatory content from these platforms, alleging that they hosted videos and posts that misrepresented facts about him and damaged his reputation. The content in question was uploaded by third parties and contained allegedly false and defamatory statements. Ramdev argued that these platforms must remove such content globally, not merely within India’s territorial jurisdiction.
Issues Before the Court:
First Issue: Can Indian courts order social media platforms to remove content globally, beyond India’s territorial jurisdiction?
Second Issue: Are intermediaries (such as Facebook, YouTube, and Google) liable for hosting defamatory content uploaded by third-party users?
Third Issue: What are the obligations of these platforms under Section 79 of the IT Act, 2000, which provides safe harbor protection to intermediaries who promptly remove unlawful content upon notice?
Judgment and Key Holdings:
The Delhi High Court delivered a landmark ruling with significant implications for content regulation:
Global Removal Orders: For content uploaded from within India, platforms must remove or disable access globally, not just within Indian territorial jurisdiction; content uploaded from abroad must at minimum be blocked for users in India. The Court reasoned that limiting removal to India alone would be ineffective, as users could simply access the content through servers in other jurisdictions.
Intermediary Responsibility: Intermediaries are responsible for ensuring that defamatory content does not remain accessible, especially after receiving a takedown request. Passive hosting does not absolve platforms of all liability.
Limited Safe Harbor Protection: Section 79 of the IT Act does not provide absolute immunity to platforms if they fail to act expeditiously on reported defamatory content. The safe harbor protection is conditional on prompt action upon gaining knowledge of unlawful content.
Extraterritorial Jurisdiction: Indian courts can issue global content removal orders where the offending content affects Indian citizens, originates from India, and the platforms conduct business within India’s jurisdiction.
Relevance to Deepfake Regulation in India:
This case establishes crucial precedents for deepfake regulation. Deepfakes often contain defamatory content, including fabricated videos of celebrities or politicians making false statements. The Swami Ramdev case creates a legal foundation for takedown requests of deepfake content from global platforms. Courts can order Google, Facebook, YouTube, and other platforms to remove deepfake content worldwide if it harms an individual’s reputation or violates their rights. The judgment highlights the ongoing tension between free speech protections and preventing online harm—a balance that deepfake regulation must carefully maintain. The case demonstrates judicial willingness to hold platforms accountable and issue extraterritorial orders when Indian citizens’ rights are violated.
IX. Conclusion
This research has demonstrated that while deepfakes possess legitimate applications in entertainment, education, accessibility, and creative expression, they are increasingly weaponized for misinformation, defamation, identity theft, financial fraud, and political manipulation. The technology’s dual-use nature demands a nuanced regulatory approach that protects against malicious use while preserving beneficial applications.
India’s current legal framework lacks specific laws addressing AI-generated synthetic media. Existing provisions under the Information Technology Act, 2000, the Indian Penal Code, 1860, and other statutes offer only partial coverage and were not designed to handle the unique challenges posed by deepfake technology. These laws suffer from definitional ambiguity, inadequate penalties, limited enforcement mechanisms, and insufficient technical infrastructure for detection and verification.
Comparatively, jurisdictions including the United States, the European Union, China, and the United Kingdom have begun drafting or implementing targeted regulations to curb deepfake misuse. The EU’s AI Act and Digital Services Act represent comprehensive approaches that balance innovation with accountability. American states have enacted piecemeal legislation addressing specific harms. These international developments put India at risk of falling behind in AI governance and leave Indian citizens vulnerable to cross-border deepfake attacks.
Key Findings of This Study:
This research has analyzed the critical legal gaps in India’s current approach to deepfake regulation, identifying the absence of specific legislation, inadequate penalties, and lack of institutional capacity. It has examined the multifaceted threats posed by deepfakes across political, financial, media, and national security domains, demonstrating the technology’s potential to undermine democratic processes, erode public trust, and cause severe individual harm.
The study has conducted a comparative analysis of international legal frameworks, highlighting best practices and cautionary examples from other jurisdictions. It has developed comprehensive policy recommendations: a Deepfake Prohibition Act with clear definitions and graduated penalties; AI transparency measures requiring disclosure of synthetic media; investment in deepfake detection technology and forensic capabilities; digital watermarking or blockchain anchoring to authenticate original content; enhanced social media accountability through stricter liability provisions; and digital literacy programs to educate citizens about identifying and responding to deepfakes.
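To ground the watermarking recommendation, the sketch below embeds a short authentication tag into the least-significant bits of an image's pixels. It is purely conceptual: LSB marks are destroyed by routine recompression, so production systems rely on robust, imperceptible watermarking schemes, and the image, tag, and layout here are invented for the demonstration.

```python
# Least-significant-bit (LSB) watermark: hide a byte string in the
# lowest bit of each pixel, then recover it. Requires NumPy.
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    flat = pixels.flatten().copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite lowest bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    return pixels.flatten()[:n_bits] & 1                 # read lowest bit back

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
mark = np.frombuffer(b"AUTHENTIC", dtype=np.uint8)
bits = np.unpackbits(mark)

stamped = embed(image, bits)
assert np.packbits(extract(stamped, bits.size)).tobytes() == b"AUTHENTIC"
```

Robust variants of this idea, paired with registry or blockchain anchoring of content hashes, would let courts and platforms check whether a contested clip matches an authenticated original.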
The Case for a Dedicated Deepfake Law in India
The findings of this research strongly support the need for a dedicated Deepfake Law in India. This legislation should clearly define deepfake-related offenses across various categories—including non-consensual pornography, political manipulation, fraud, and defamation—and establish strict criminal penalties proportionate to the harm caused. The law must mandate AI transparency and accountability, ensuring that deepfake-generating software is regulated, users are verified, and synthetic content is clearly labeled.
India needs to create a national deepfake monitoring system with dedicated resources for real-time detection, verification, and rapid response to emerging threats. This system should coordinate between law enforcement agencies, cybersecurity experts, and judicial authorities. The law must hold social media and technology platforms accountable for deepfake content through stricter liability provisions, while maintaining appropriate safe harbors for platforms that implement robust detection and removal systems.
Finally, India must promote AI literacy among citizens to enable independent fact-checking and critical media analysis. Educational initiatives should target all age groups and socioeconomic segments, with particular focus on vulnerable populations most susceptible to deepfake manipulation.
Final Recommendations
Without comprehensive legal intervention, deepfakes will continue to threaten public trust, personal security, democratic processes, and social harmony. As AI technology advances exponentially, India must proactively regulate deepfakes rather than reacting after irreparable damage is done. The time for action is now—legislative delay risks allowing deepfake technology to entrench itself beyond effective regulatory control.
This research calls upon policymakers, legal scholars, technology companies, civil society organizations, and citizens to engage in informed dialogue about deepfake regulation. The goal must be to develop a legal framework that protects fundamental rights, enables innovation, preserves free expression, and safeguards against malicious exploitation of synthetic media technology. India’s response to the deepfake challenge will serve as a model for other developing nations grappling with similar issues at the intersection of technology, law, and society.
References
[1] Information Technology Act, No. 21 of 2000, INDIA CODE (2000).
[2] Indian Penal Code, No. 45 of 1860, INDIA CODE (1860).
[3] Indecent Representation of Women (Prohibition) Act, No. 60 of 1986, INDIA CODE (1986).
[4] Representation of the People Act, No. 43 of 1951, INDIA CODE (1951).
[5] Digital Personal Data Protection Act, No. 22 of 2023, INDIA CODE (2023).
[6] Benjamin L.W. Sobel, A New Common Law of Web Scraping, 25 LEWIS & CLARK L. REV. 147 (2021). [Note: Specific citation to deepfake survey work requires verification]
[7] The Deepfake Defense—Exploring the Limits of the Law and Ethical Rules, AMERICAN BAR ASSOCIATION (2022). [Complete citation requires verification]
[8] American Civil Liberties Union, Free Speech, Technology, and the First Amendment, available at https://www.aclu.org (last visited Dec. 23, 2025).
[9] Anil Kapoor v. Simply Life India & Ors., Delhi High Court, CS(COMM) 652/2023 (India). [Complete citation requires verification]