Published On: August 21st 2025
Authored By: Aditi Mohari
School of Law, Devi Ahilya Vishwavidyalaya
Abstract
The rapid rise of deepfake technology, AI-generated synthetic media that alters audio, video, or images, creates immediate legal and ethical concerns for India’s digital society. Deepfakes undermine individual privacy, consent, reputation, and national security, and fuel misinformation, disinformation, and electoral tampering. The Indian legal architecture, including the Information Technology Act and the Indian Penal Code, remains reactive and piecemeal, and therefore does not provide adequate redress for the complex harms posed by deepfakes. This paper adopts a doctrinal and comparative legal approach, analysing Indian law against regulatory responses in the United States, the European Union, and China. It surveys judicial responses in India and the emerging recognition of digital personality rights, while noting the absence of a coordinated legislative approach. The paper concludes by calling for a prospective, rights-based regulatory framework built on watermarking, AI accountability, platform responsibility, and public digital literacy. This view is grounded in Indian constitutional values, particularly Articles 19 and 21, defending innovation and freedom of expression while preserving safeguards against digital harms. The research concludes that without reform that anticipates the deepfake threat and builds public trust in the digital information ecosystem, India risks falling further behind the evolving threat.
Keywords: Deepfakes, Misinformation, Disinformation, Artificial Intelligence (AI), Generative Adversarial Networks (GANs), Non-consensual pornography, Synthetic media, Identity theft.
Background and History
The fast-paced development of Artificial Intelligence (AI), specifically generative technologies, has created “deepfakes”: realistic synthetic media in which audio, video, or images are manipulated through deep learning algorithms. Initially developed for innovation or harmless amusement, deepfakes have since been weaponised for misinformation, political propaganda, cyberbullying, identity theft, and non-consensual pornography. India’s vast population and widespread digital adoption make it particularly vulnerable to synthetic media threats.1 The proliferation of smartphones, low-cost internet, and an increasingly socially engaged population has fuelled the consumption of synthetic content.2 The pattern of misinformation spreading faster than it can be detected and disrupted is troubling, since it threatens individual rights, democratic processes, national security, and public trust. India has no overarching legislation or policy directly addressing deepfakes. Existing remedies must be drawn from broader statutes such as the Information Technology Act, 2000, the Indian Penal Code, 1860, and certain provisions of the Copyright Act, 1957. These general frameworks, however, are not tailored to the distinct harms posed by AI-generated misinformation. In addition, ethical concerns around consent, digital identity, and the right to privacy, as reiterated by the Supreme Court’s landmark Puttaswamy judgment (2017), demand a more sophisticated regulatory approach. The gap between generation and detection technology continues to widen as deepfake technology advances, making it urgent for India to develop a unified ethical and legal framework that balances the benefits of innovation and freedom of expression against accountability, privacy, and protection from harm.
Research Methodology
In this study, I take a doctrinal and analytical approach, examining existing Indian legal provisions, judicial decisions, and constitutional principles relevant to deepfakes and misinformation. I use comparative legal analysis of international frameworks (the EU, the U.S., and China) and significant deepfake incidents involving public figures. Through qualitative content analysis of available government advisories, policy drafts, judgments, and legal treatises, I analyse the legal, ethical, and social dimensions of deepfakes and misinformation. Although I do not collect my own empirical data, I seek depth and significance through secondary sources, which I analyse to triangulate and corroborate the legal, ethical, and social perspectives.
Research Problem
The rise of deepfakes has revealed major gaps in India’s legal framework, including the absence of specific provisions addressing the widespread misuse of AI-generated content for misinformation, identity theft, and non-consensual media. This research explores how India might create a robust legal architecture that balances innovation and individual rights with digital responsibility and free speech.
Hypothesis
India’s continued reliance on pre-existing laws such as the IT Act and the IPC is inadequate without a distinct, technology-specific legal framework to regulate the abuse of deepfake content. Without a specialised, rights-oriented law, complementary measures such as AI detection, intermediary accountability, and civic awareness will be limited in their ability to protect the democratic process, individual rights, and the integrity of the digital information ecosystem.
Recommendations
To effectively address the growing threat posed by deepfakes in India, this paper proposes a multi-pronged strategy combining legal, institutional, technological, and educational reforms.
Legal and Regulatory Reforms
- Enact a Standalone Deepfake Regulation Act
- Introduce statutory definitions of “synthetic media” and “deepfake” as legal terms in a new statute
- Establish criminal (or civil) penalties for malicious creation, dissemination, or transmission of deepfakes without consent
- Statutory Recognition of Personality and Digital Rights
- Recognize and codify digital personality rights under Indian law to prevent the unauthorized use of AI to replicate a person’s likeness, voice, and visual identity.
- Recognize these rights as an extension of Article 21 of the Constitution.
Institutional and Platform-Level Accountability
- Mandatory Watermarking and Metadata Disclosure
- Require AI developers and platforms to embed watermarks or origin metadata in all AI-generated content (an illustrative technical sketch follows this list).
- Strict Takedown and Reporting Regime
- Amend the intermediary guidelines to require the removal of flagged deepfake content within 24 hours.
- Require transparency reports on the number of takedown requests received and the action taken.
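To make the watermarking and metadata-disclosure recommendation concrete, the following minimal sketch (in Python, using the open-source Pillow imaging library) shows one way a generator could embed origin metadata in a PNG image before publication. The metadata keys used here (ai_generated, generator) are illustrative assumptions only; no Indian statute or technical standard currently prescribes them.

    # Illustrative sketch: embedding AI-provenance metadata in a generated PNG image.
    # Requires the Pillow library; the metadata keys below are hypothetical examples.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def tag_ai_generated(input_path: str, output_path: str, generator: str) -> None:
        """Re-save an image with origin metadata identifying it as AI-generated."""
        image = Image.open(input_path)
        metadata = PngInfo()
        metadata.add_text("ai_generated", "true")   # hypothetical provenance flag
        metadata.add_text("generator", generator)   # e.g. the name of the model or tool
        image.save(output_path, pnginfo=metadata)

    # Example usage:
    # tag_ai_generated("synthetic.png", "synthetic_tagged.png", "example-model-v1")

In practice, a regulation of this kind would more likely reference an industry provenance standard (such as cryptographically signed content credentials) rather than plain-text tags, since simple metadata can be stripped when a file is re-encoded or re-uploaded.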
Conclusion
This research demonstrates that while India’s legal system and judiciary have begun to confront the harms of synthetic media, their responses have been fragmented and inconsistent. Existing privacy and personality rights offer protection in particular situations but lack the statutory backing needed for consistent enforcement. Drawing on international experience, India should establish a forward-looking, multi-level regulatory framework that balances innovation with the fundamental constitutional guarantees in Articles 19(1)(a), 19(2), and 21. Deepfake regulation should extend beyond national security and election manipulation to address broader harms, including digital dignity, informational autonomy, and the protection of citizens from information and communication harms.
To this end, India should:
- Enact targeted legislation that defines deepfakes and sets out the consequences of creating or dealing in deepfake content.
- Mandate disclosure, watermarking, and traceability requirements for AI-generated media.
- Enforce intermediary liability with statutory takedown timelines for platforms.
- Establish national digital literacy and awareness programs to empower users.
Legal development must take place not only in statutes but also in regulatory philosophy and, ultimately, regulatory preparedness. Without reform, the fragile scaffolding around deepfakes will continue to undermine public trust, democratic processes, and the rule of law.