Published On: April 22nd, 2026
Authored By: Abhishek Dubey
Lucknow Law College
I. Introduction
The explosion of Artificial Intelligence (AI) is not just a technological milestone — it is a legal minefield. We are currently grappling with deepfakes: AI-crafted illusions so realistic they can mimic anyone’s voice or face with terrifying accuracy. While film studios use this technology for creative purposes such as de-ageing actors, its darker applications are wreaking havoc on privacy, dignity, and the very concept of free expression.
India has become ground zero for this crisis. Over the last two years, manipulated clips of celebrities and private citizens have flooded social media platforms. This surge forces us to ask: Is our legal toolkit — built on the Information Technology Act, 2000[1] and the Constitution of India, 1950[2] — actually equipped for an era where “seeing is no longer believing”?
II. The GAN Problem: Innovation vs. Risk
Deepfakes rely on Generative Adversarial Networks (GANs), a class of machine learning architecture in which two neural networks compete against each other. One network (the generator) produces synthetic media while the other (the discriminator) attempts to distinguish it from genuine samples; each round of this contest sharpens the generator, pushing the technology to a point where neither human eyes nor conventional detection software can reliably tell the difference.
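The adversarial loop described above can be sketched in miniature. The following is a deliberately simplified toy, not a real neural-network implementation: "real" data are stood in for by a single target value (`REAL_MEAN`), the discriminator is reduced to a midpoint decision boundary, and the generator is a single number nudged toward whatever the discriminator currently accepts as real. All names and values are illustrative assumptions, chosen only to show the feedback dynamic by which the generator converges on the real distribution.

```python
# Toy illustration of the adversarial dynamic behind GANs.
# NOT a real GAN: the "generator" is one number and the
# "discriminator" is a midpoint threshold, chosen purely to
# make the competitive feedback loop visible.

REAL_MEAN = 5.0  # stand-in for the true data distribution


def discriminator_boundary(real_mean: float, fake_mean: float) -> float:
    """Best simple threshold separating real from fake: the midpoint."""
    return (real_mean + fake_mean) / 2.0


def train(steps: int = 100, lr: float = 0.2) -> float:
    """Run the two-player loop; returns the generator's final output."""
    fake_mean = 0.0  # generator starts far from the real distribution
    for _ in range(steps):
        # Discriminator adapts its boundary to the generator's latest output.
        boundary = discriminator_boundary(REAL_MEAN, fake_mean)
        # Generator update: move its output toward the side of the
        # boundary that the discriminator currently labels "real".
        fake_mean += lr * (boundary - fake_mean)
    return fake_mean


if __name__ == "__main__":
    print(round(train(), 2))  # converges to approximately 5.0
```

Each iteration the discriminator's boundary shifts, and the generator chases it; the fixed point of the loop is the real distribution itself. This is the essence of why mature GAN output becomes indistinguishable from genuine media.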
The danger here is not merely “fake news.” It is a direct assault on identity. When a person’s likeness is non-consensually placed in a defamatory or sexual context, the damage to their reputation is near-instant and often irreversible. Furthermore, in a democracy, deepfakes do not just mislead — they poison the well of public trust, particularly during high-stakes electoral processes.
III. The Constitutional Tightrope
The legal battle against deepfakes sits at the intersection of two fundamental rights: privacy and freedom of speech and expression. Any legislative response must navigate this tension with constitutional precision.
Identity as a Constitutional Value: In the landmark ruling in Justice K.S. Puttaswamy (Retd.) v. Union of India,[3] a nine-judge bench of the Supreme Court of India unanimously held that the right to privacy is a fundamental right protected under Articles 14, 19, and 21 of the Constitution. Deepfakes directly violate this right by stripping individuals of control over their own digital persona and identity.
The Free Speech Bar: However, AI-generated content cannot be banned outright. Article 19(1)(a) of the Constitution guarantees freedom of speech and expression, and the Supreme Court’s decision in Shreya Singhal v. Union of India[4] firmly established that laws restricting speech must not be “vague” or “overbroad.” The challenge, therefore, is crafting legislation that stops the harm of malicious deepfakes without suppressing legitimate satire, parody, or artistic expression.
IV. Can Current Laws Keep Up?
India presently lacks a dedicated “Deepfake Act,” compelling courts and complainants to repurpose existing statutory provisions under the Information Technology Act, 2000. The following table illustrates the key provisions invoked and their limitations:
| Provision | Current Use-Case | The AI-Era Flaw |
|---|---|---|
| Section 66C[5] | Identity theft through fraudulent use of electronic signatures or passwords. | Does not address non-consensual use of a person’s visual likeness or voice. |
| Section 66D[6] | Cheating by impersonation using a computer resource. | Requires proof of fraudulent intent; does not cover reputational harm without financial deception. |
| Section 66E[7] | Privacy violations, including non-consensual image capture. | Narrowly drafted; does not explicitly cover general likeness theft or AI-synthesised imagery. |
| Sections 67 & 67A[8] | Transmission of obscene and sexually explicit electronic content. | Focuses on the nature of content (“obscenity”), not the act of manipulation or non-consensual fabrication. |
| Section 79[9] | Safe harbour protection for intermediaries acting in good faith. | Intermediaries often react too slowly to viral content; the “notification” mechanism cannot match the speed of algorithmic distribution. |
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021[10] mandate that platforms remove flagged illegal content upon notification. Notably, in October 2025, the Ministry of Electronics and Information Technology (MeitY) proposed amendments to these Rules to specifically address AI-synthesised and deepfake content — a significant step, though implementation remains pending.[11] In practice, however, by the time a formal complaint is filed and processed, the offending deepfake has often already reached millions of viewers. The law continues to move at a deliberate pace while the technology advances exponentially.
V. Fixing the Framework: A Proactive Shift
Relying on repurposed statutes is, at best, a stopgap measure. India requires a multi-pronged, forward-looking legal strategy:
1. Direct Legislation: A dedicated law must define “malicious synthetic media” with precision, establish clear criminal penalties for creation and distribution with intent to deceive, and provide a civil remedy for reputational harm — all while carving out explicit exceptions for satire and artistic use.
2. Mandatory Labelling: Platforms should be legally required to implement AI-detection mechanisms and digitally “watermark” AI-generated or AI-manipulated content at the point of upload. If it is synthetic, it must be tagged as such — immediately and conspicuously.
3. Speedy Redress: A dedicated “takedown-on-demand” mechanism for clear cases of identity theft through deepfakes — modelled on the urgency of injunctive relief — would bypass the slow bureaucratic delays inherent in the standard complaint-and-notification process.
VI. Conclusion
Deepfakes represent a fundamental shift in how democratic societies perceive and construct truth. India’s current legal framework provides some defence, but it was never architected for this kind of algorithmic threat to identity and public discourse. To protect citizens’ dignity and preserve the integrity of democratic deliberation, our laws must evolve from being reactive “takedown” instruments to proactive systems of detection, deterrence, and redress. The goal must be to strike a principled constitutional balance: one where artificial intelligence can flourish as a tool of creativity and progress, but never at the expense of the fundamental rights enshrined in our Constitution.
References
[1] Information Technology Act, No. 21 of 2000, INDIA CODE (2000).
[2] INDIA CONST.
[3] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1 (India).
[4] Shreya Singhal v. Union of India, (2015) 5 SCC 1 (India).
[5] Information Technology Act § 66C (2000).
[6] Information Technology Act § 66D (2000).
[7] Information Technology Act § 66E (2000).
[8] Information Technology Act §§ 67, 67A (2000).
[9] Information Technology Act § 79 (2000).
[10] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, G.S.R. 139(E) (India).
[11] Ministry of Electronics and Information Technology, Proposed Amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Oct. 22, 2025) (addressing AI-synthesised and deepfake content).