Published on: 27th December 2025
Authored by: Yashwi Fogla
MIT-WPU
I. Introduction
The emergence of artificial intelligence (AI) has revolutionised creative and communicative processes across the world. Among its most controversial by-products are deepfakes: synthetic audio-visual content created through deep learning algorithms capable of mimicking real individuals with near-perfect accuracy. Initially confined to academic experimentation, deepfake technology has rapidly proliferated into the public domain, resulting in significant legal, ethical, and security challenges. In 2017, the term "deepfake" entered mainstream consciousness when AI-generated pornographic videos featuring celebrities began circulating online without consent. Since then, deepfakes have evolved beyond mere entertainment, encompassing political propaganda, defamation, blackmail, misinformation campaigns, and cyber harassment.
The legal systems of most jurisdictions, including India, remain ill-equipped to regulate such synthetic media effectively. Existing laws on defamation, privacy, obscenity, and cybercrime provide partial remedies but fail to address the full range of harms that deepfakes can inflict. This article analyses the current legal vacuum surrounding deepfakes, examines their implications for rights and national security, and explores potential legislative and regulatory frameworks for effective governance in India and beyond.
II. Understanding Deepfakes and Their Operation
A deepfake refers to a form of synthetic media in which a person's likeness, voice, or actions are artificially generated using deep learning, particularly Generative Adversarial Networks (GANs). These algorithms train two neural networks against each other: the "generator" creates fake content, while the "discriminator" attempts to detect falsity. Through iterative learning, the generator improves until the produced media becomes virtually indistinguishable from authentic recordings.
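For readers unfamiliar with the mechanics, the following minimal sketch illustrates the adversarial training loop described above. It is written in PyTorch with illustrative network sizes and random stand-in data; it is an expository toy, not a representation of any actual deepfake pipeline.

import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 image (illustrative assumption)

# The generator maps random noise to a synthetic sample; the discriminator
# outputs the probability that a given sample is real.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, data_dim) * 2 - 1  # stand-in for genuine media samples

for step in range(200):
    # 1. Train the discriminator to separate real samples from generated ones.
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to produce samples the discriminator accepts as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

With each iteration the generator's output drifts closer to the real data distribution, which is precisely what makes the resulting media so difficult to distinguish from authentic recordings.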
While the technology holds legitimate applications in entertainment, education, and accessibility (such as recreating lost historical footage or enabling voice synthesis for speech-impaired individuals), it simultaneously poses immense risks. The same algorithm that resurrects historical figures for documentaries can be misused to fabricate videos of political leaders declaring war or individuals appearing in explicit material.
The primary legal concern is not merely the act of creation, but rather the intent and consequence of dissemination, which can range from reputational harm to threats to democratic stability.
III. The Legal Status of Deepfakes: A Comparative Overview
A. United States
In the United States, there is no comprehensive federal legislation regulating deepfakes. Certain states, however, have enacted specific laws. Texas[1] and California, for instance, criminalise the malicious creation or distribution of deepfakes intended to influence elections or depict non-consensual sexual activity. The Deepfake Report Act, 2019[2], introduced in the U.S. Congress, sought to mandate the Department of Homeland Security to conduct studies on deepfake technology but fell short of creating enforceable prohibitions.
Existing laws such as those on defamation, fraud, or harassment may offer partial redress, but they rely on proof of damage and intent, which are difficult to establish given the anonymity and speed of digital dissemination. Moreover, the First Amendment complicates efforts to regulate deepfakes due to the constitutional protection of free speech, even when the content is false but non-defamatory.
B. European Union
The European Union (EU)[3] has taken a broader data protection and platform accountability approach. Under the General Data Protection Regulation (GDPR)[4], the unauthorised use of personal likeness or biometric data may constitute a breach of privacy. Additionally, the forthcoming AI Act proposes risk-based regulation of AI systems, classifying deepfake generation as "high-risk" when used for manipulative or deceptive purposes.
However, enforcement remains challenging, as many deepfake creators operate across borders and can easily exploit jurisdictional loopholes. The EU's Digital Services Act (DSA) also imposes obligations on platforms to remove harmful or misleading content expeditiously once notified, yet the volume and velocity of deepfake dissemination outpace traditional takedown mechanisms.
C. India
India lacks any specific legislation addressing deepfakes. The Information Technology Act, 2000 (IT Act)[5] and related rules offer limited coverage. Section 66D (cheating by personation through computer resources) and Section 67 (publishing obscene material) can be invoked in certain cases, as can provisions of the Indian Penal Code (IPC)[6] concerning defamation (Sections 499–500) and criminal intimidation (Section 503).
However, these provisions are ill-suited for technologically advanced offences. For instance, Section 66D presupposes an element of "cheating," which may not exist in deepfake dissemination intended solely for ridicule or misinformation. Likewise, existing obscenity laws were not designed to address non-consensual synthetic pornography.
In the absence of a specialised legal framework, victims are forced to navigate a fragmented landscape of civil and criminal remedies, none of which adequately address the scale and speed of digital harm.
IV. Legal and Ethical Implications of Deepfakes
A. Privacy and the Right to Reputation
Deepfakes violate the right to privacy, recognised as a fundamental right in Justice K.S. Puttaswamy (Retd.) v. Union of India (2017)[7], which protects informational autonomy and individual dignity. The unauthorised use of an individual's likeness for synthetic content constitutes a clear breach of informational privacy. Moreover, when used for defamation or sexual exploitation, deepfakes cause irreparable damage to reputation, a right that, though not explicitly fundamental, has been judicially read into Article 21 of the Constitution.
B. National Security and Public Order
Beyond individual harm, deepfakes pose existential threats to national security. AI-generated propaganda and misinformation can erode public trust, destabilise elections, and foment violence. A fabricated video showing a military official issuing false commands or a political leader making inflammatory remarks could provoke diplomatic crises or civil unrest before verification is possible.
C. Evidentiary Challenges
Deepfakes also challenge the legal system's evidentiary framework. Under the Indian Evidence Act, 1872, electronic records are admissible subject to Section 65B certification. However, distinguishing between authentic and manipulated video evidence has become increasingly complex. This raises questions about the presumption of authenticity and burdens of proof in criminal trials.
V. The Legal Vacuum in Regulation
Despite their far-reaching consequences, deepfakes occupy a grey zone in Indian law. Several factors contribute to this vacuum: technological ambiguity, jurisdictional limitations, unsettled platform liability, and a lack of digital literacy. The Digital India Bill, intended to replace the IT Act, reportedly treats AI-generated misinformation as a key challenge, but as of this writing no draft has been enacted into law.
VI. Towards a Regulatory Framework
A. Need for Specific Legislation
A dedicated Deepfake Prevention and Accountability Act is essential to address the creation, distribution, and misuse of synthetic media. Such legislation should criminalise the non-consensual production and dissemination of deepfakes intended to harm, deceive, or defraud. It should also recognise exceptions for satire, parody, and educational purposes, provided there is clear disclosure that the content is artificially generated.
B. Strengthening Platform Accountability
Intermediaries must bear shared responsibility. The existing Intermediary Guidelines and Digital Media Ethics Code, 2021[8] can be expanded to mandate the use of watermarking, content provenance tags, and detection algorithms to flag synthetic media.
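By way of illustration only, the sketch below shows one possible shape of such a content provenance tag, using only Python's standard library: the uploader binds a mandatory AI-disclosure flag to a cryptographic hash of the media file and signs the record so that later tampering is detectable. The field names, the HMAC signing scheme, and the key handling are assumptions made for exposition; they do not reflect any existing Indian rule or industry standard.

import hashlib, hmac, json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key held by the uploader or platform

def make_provenance_tag(media_bytes: bytes, ai_generated: bool, creator: str) -> dict:
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds the tag to this exact file
        "ai_generated": ai_generated,                        # the disclosure flag a rule could mandate
        "creator": creator,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance_tag(media_bytes: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())

# Example: a platform tags an AI-generated clip at upload and re-verifies it on reshare.
clip = b"...video bytes..."
tag = make_provenance_tag(clip, ai_generated=True, creator="studio@example.org")
assert verify_provenance_tag(clip, tag)

Symmetric signing of this kind assumes a trusted platform; a public standard would more likely rely on publisher-held private keys and certificate chains, but the regulatory point is the same: the disclosure travels with the file, and any alteration invalidates it.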
C. Promoting Technological Countermeasures
Regulation should be complemented by investment in detection technologies. Collaborative efforts between government agencies, academia, and industry can develop AI-based verification systems capable of analysing inconsistencies in facial movement, lighting, and audio.
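As a purely illustrative sketch of what such a verification system might look like at the frame level (an untrained toy model with assumed input sizes, not a deployed detector), a small classifier can score each frame and aggregate the result; operational systems additionally exploit temporal, physiological, and audio inconsistencies.

import torch
import torch.nn as nn

frame_classifier = nn.Sequential(             # toy CNN over 3x128x128 video frames
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

def video_synthetic_score(frames: torch.Tensor) -> float:
    # frames: tensor of shape (num_frames, 3, 128, 128) holding decoded video frames
    with torch.no_grad():
        per_frame = frame_classifier(frames)   # probability each frame is synthetic
    return per_frame.mean().item()             # simple video-level aggregate

frames = torch.rand(8, 3, 128, 128)            # stand-in for decoded frames
print(f"Estimated probability the clip is synthetic: {video_synthetic_score(frames):.2f}")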
D. Ensuring Due Process and Safeguards
Any legal framework must also safeguard freedom of expression. Overbroad criminalisation risks chilling legitimate artistic, journalistic, or satirical expression. Hence, clear distinctions must be drawn between harmful deepfakes and creative synthetic media.
VII. International Cooperation
Given the borderless nature of cyberspace, deepfake regulation requires international collaboration. India could lead regional efforts through frameworks akin to the Budapest Convention on Cybercrime, focusing on cross-border data sharing, harmonisation of evidentiary standards, and joint research programmes to develop global watermarking protocols and authenticity standards.
VIII. Ethical and Philosophical Dimensions
The deepfake phenomenon raises deeper questions about truth, consent, and human identity. If any individual's likeness can be synthetically manipulated, the concept of personal autonomy itself faces an existential threat. Legal regulation must therefore transcend punitive logic and embrace a rights-based ethical framework that prioritises consent, transparency, and accountability.
IX. Conclusion
Deepfakes represent both the ingenuity and peril of artificial intelligence. While their potential for creative innovation is undeniable, their misuse threatens to distort reality, manipulate consent, and destabilise institutions. The Indian legal system's current patchwork of provisions is inadequate to confront such technologically complex threats.
A coherent legal framework must rest on four pillars: statutory recognition of deepfake-specific offences, shared intermediary responsibility, robust forensic infrastructure, and preservation of freedom of expression. Ultimately, law must reclaim truth from technological distortion, for in the age of deepfakes, defending the rule of law begins with defending the integrity of reality itself.
[1] Texas Election Code §255.004 (2019).
[2] Deepfake Report Act, 2019 (US Congress).
[3] European Commission, Proposal for a Regulation on Artificial Intelligence (AI Act, 2021).
[4] General Data Protection Regulation (EU) 2016/679.
[5] Information Technology Act, 2000 (India).
[6] Indian Penal Code, 1860.
[7] Justice K.S. Puttaswamy (Retd.) v Union of India (2017) 10 SCC 1.
[8] Intermediary Guidelines and Digital Media Ethics Code Rules, 2021 (India).