Published On: April 22, 2026
Authored By: Anshul Kumar Manik
University of Allahabad
I. Introduction
Artificial intelligence has dramatically transformed the digital landscape over the past decade. Modern AI systems can now produce highly realistic images, audio recordings, and videos that closely imitate real people. One of the most debated developments emerging from these advances is the rise of deepfake technology. Deepfakes are digitally manipulated or artificially generated media, created through artificial intelligence techniques, that replicate a person’s appearance, voice, or behavior in a convincing manner.[1]
Deepfake systems rely on machine-learning models trained on large collections of images, audio samples, and videos. These models learn patterns in facial expressions, voice tone, and lip movements during speech. After sufficient training, the algorithm can generate new content that appears authentic even though it is entirely fabricated. Because the resulting media often looks extremely realistic, ordinary viewers can find it difficult to distinguish genuine recordings from manipulated ones.[2]
Despite its controversial reputation, deepfake technology is not inherently harmful. It has several legitimate uses in industries such as filmmaking, video gaming, education, and digital entertainment. Filmmakers may use the technology to recreate historical characters or digitally alter scenes, while educators may use it to create immersive learning tools. However, the same technology can also be misused for harmful purposes. In recent years, deepfakes have increasingly been used to spread misinformation, damage reputations, and produce explicit content without the consent of the individuals depicted.[3] The rapid circulation of manipulated media through social media platforms has intensified concerns about online harassment, misinformation, and digital manipulation.
India faces particular challenges in this context because it has one of the largest internet user populations in the world. Social networking platforms, video-sharing websites, and encrypted messaging applications allow information to spread rapidly across the country. Once manipulated media is uploaded online, it can quickly reach millions of viewers before its authenticity can be verified. Several incidents involving fabricated videos of celebrities and political personalities have demonstrated the disruptive potential of deepfake technology. In many cases, such videos create confusion among viewers and damage the reputation of the individuals involved. These incidents have raised important questions about whether existing legal mechanisms are sufficient to regulate such emerging technological threats.
Currently, India does not have a dedicated legal framework that directly addresses deepfake technology. Instead, authorities rely on existing cybercrime laws, criminal provisions, and regulatory rules governing online platforms. However, many of these laws were enacted before the rapid development of artificial intelligence technologies and therefore may not fully address the complexities associated with synthetic media. As a result, deepfake technology raises important legal questions regarding criminal liability, privacy protection, and the responsibility of digital platforms in controlling the spread of manipulated content.[4]
II. Legal Analysis
Deepfake technology is primarily driven by a family of artificial intelligence techniques known as deep learning. One of the most widely used methods is the generative adversarial network (GAN). In this architecture, two neural networks are trained against each other: a generator produces artificial content, while a discriminator attempts to determine whether a given sample is real or generated. Through this continuous contest, the generator gradually learns to produce synthetic media convincing enough to fool the discriminator.[5]
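For readers unfamiliar with how this adversarial contest plays out, the toy sketch below implements it in plain Python on a one-dimensional example: a two-parameter generator learns to imitate samples from a Gaussian distribution while a logistic-regression discriminator tries to tell real from fake. Every numeric choice here (the target distribution, learning rate, batch size, and the weight-decay term added to stabilize the oscillation-prone dynamics) is an illustrative assumption, not a parameter of any real deepfake system, which would use deep networks over images or audio rather than a line.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-30.0, min(30.0, x))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

# "Real" data: a 1-D Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 1.25

a, b = 1.0, 0.0   # generator G(z) = a*z + b, z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, batch, decay = 0.01, 64, 0.995  # decay damps the adversarial oscillation

for step in range(3000):
    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    real = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(batch)]
    fake = [a * random.gauss(0, 1) + b for _ in range(batch)]
    d_real = [sigmoid(w * x + c) for x in real]
    d_fake = [sigmoid(w * x + c) for x in fake]
    w = decay * w + lr * (mean([(1 - d) * x for d, x in zip(d_real, real)])
                          - mean([d * x for d, x in zip(d_fake, fake)]))
    c = decay * c + lr * (mean([1 - d for d in d_real]) - mean(d_fake))

    # Generator step: ascend log D(G(z)) so fakes score as "real".
    z = [random.gauss(0, 1) for _ in range(batch)]
    d_f = [sigmoid(w * (a * zi + b) + c) for zi in z]
    a += lr * mean([(1 - d) * w * zi for d, zi in zip(d_f, z)])
    b += lr * mean([(1 - d) * w for d in d_f])

# After training, generated samples cluster around the real mean.
gen_mean = mean([a * random.gauss(0, 1) + b for _ in range(10000)])
print(f"generated mean ~= {gen_mean:.2f} (target {REAL_MEAN})")
```

The key point for the legal discussion is structural: no human ever labels what a "convincing" fake looks like. The discriminator supplies that pressure automatically, which is why GAN-generated media improves as fast as detection does.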
1. Non-Consensual Explicit Content
The legal concerns associated with deepfakes largely arise from the various ways in which the technology can be abused. One of the most troubling forms of misuse involves the creation of non-consensual explicit content. In many cases, perpetrators digitally insert the face of an individual — often women or public figures — into explicit videos without their consent. These fabricated videos can cause severe emotional distress and reputational damage to the victims. Such acts are widely regarded as violations of privacy and personal dignity. Victims often face significant challenges when attempting to remove this content from the internet because, once it spreads across multiple platforms, complete removal becomes extremely difficult.
2. Political Misinformation
Another serious concern involves the use of deepfake technology in political misinformation. Manipulated videos may depict political leaders making statements that they never actually made. During election periods, such fabricated content can mislead voters, influence political debates, and undermine trust in democratic institutions. In a country like India, where millions of citizens rely on social media for news and political information, the impact of such misinformation can be substantial.[6] The rapid dissemination of manipulated videos may distort public perception before fact-checking mechanisms have time to respond.
3. Financial Fraud and Cybercrime
Deepfake technology has also been used in financial fraud and cybercrime. For example, criminals may create artificial audio recordings that imitate the voice of a company executive or government official. These recordings can then be used to deceive employees into transferring money or sharing confidential information.
4. Enforcement Challenges and Intermediary Responsibility
Despite these risks, Indian law does not currently treat the creation of deepfakes as a separate criminal offense. Instead, law enforcement authorities rely on existing provisions relating to identity theft, fraud, obscenity, and defamation. While these provisions can address certain aspects of deepfake misuse, they do not fully capture the technological complexity associated with AI-generated media. Another major difficulty lies in identifying the individuals responsible for creating deepfake content. The anonymity provided by the internet allows perpetrators to upload manipulated media without revealing their identity. Once the content spreads across multiple platforms, tracing the original source becomes extremely challenging. Even if authorities succeed in removing the original content, copies may continue circulating on other platforms or through private messaging services. Consequently, the harmful effects of the manipulated media may persist long after the initial upload.
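One technical building block platforms use to find re-uploads of known harmful media is perceptual hashing: a compact fingerprint that changes little under re-encoding, resizing, or brightness shifts, so copies can be matched even after minor edits. The sketch below implements the simplest such scheme, an "average hash" (aHash), in plain Python; the 8x8 grid, the synthetic gradient "frames," and the use of single images rather than video are all illustrative simplifications of what production matching systems do.

```python
def average_hash(pixels, grid=8):
    """64-bit perceptual hash: downscale to grid x grid by block
    averaging, then set a bit for each cell brighter than the mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            y0, y1 = gy * h // grid, (gy + 1) * h // grid
            x0, x1 = gx * w // grid, (gx + 1) * w // grid
            block = [pixels[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            cells.append(sum(block) / len(block))
    avg = sum(cells) / len(cells)
    bits = 0
    for v in cells:
        bits = (bits << 1) | (1 if v > avg else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits; small distance means likely the same content."""
    return bin(h1 ^ h2).count("1")

# Synthetic 64x64 grayscale "frames": a gradient, a brightened re-upload
# of it, and a tonally inverted frame standing in for different content.
original  = [[(x + y) % 256 for x in range(64)] for y in range(64)]
reupload  = [[min(255, (x + y) % 256 + 10) for x in range(64)] for y in range(64)]
different = [[126 - ((x + y) % 256) for x in range(64)] for y in range(64)]

d_same = hamming(average_hash(original), average_hash(reupload))
d_diff = hamming(average_hash(original), average_hash(different))
print(d_same, d_diff)  # prints 0 56: the edited copy still matches exactly
```

Because the hash depends only on coarse brightness structure, the uniformly brightened copy produces an identical fingerprint, while genuinely different content lands far away in Hamming distance. This is also why hash matching alone cannot solve the takedown problem: it only finds content already known to be harmful, and adversarial re-editing can push a copy outside the matching threshold.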
Because of these challenges, the responsibility of digital intermediaries, such as social media platforms and online hosting services, has become a significant issue in regulatory discussions. These platforms largely determine how quickly manipulated media spreads across the internet. Under Indian law, online intermediaries enjoy certain protections from liability provided they comply with due-diligence requirements and remove unlawful content once notified by the appropriate authorities. Critics argue, however, that this notice-and-takedown model may be too slow for deepfakes, which can reach a mass audience before a takedown request is even processed.
As a result, some experts have suggested the development of proactive detection technologies capable of identifying manipulated media before it becomes widely distributed. Researchers are developing artificial intelligence tools that analyze subtle inconsistencies in videos, such as irregular blinking patterns, unusual lighting effects, or unnatural facial movements. Nevertheless, these detection systems remain imperfect because deepfake creation technologies continue to evolve.
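As an illustration of the kind of cue such detectors rely on, the sketch below flags video clips whose blink rate is implausibly low, given a per-frame eye-openness signal (in real systems this would be an eye-aspect-ratio computed from facial landmarks; here it is synthetic). The thresholds and the blink-rate baseline are illustrative assumptions, and this particular cue is now largely historical: newer generators blink convincingly, which is exactly why such detection systems remain imperfect.

```python
def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count dips of the eye-openness signal below `threshold`
    that last at least `min_frames` consecutive frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # a blink still in progress at clip end
        blinks += 1
    return blinks

def flag_low_blink_rate(ear_series, fps=30, min_blinks_per_min=6):
    """Return (suspicious?, blinks per minute) for a clip.
    Humans typically blink ~15-20 times per minute; the cutoff of 6
    is an illustrative choice, not an established forensic standard."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes
    return rate < min_blinks_per_min, rate

# Two synthetic 60-second clips at 30 fps (eyes open => signal ~0.30,
# a blink => ~5 frames near 0.08).
genuine = ([0.30] * 100 + [0.08] * 5) * 17 + [0.30] * 15  # ~17 blinks/min
suspect = [0.30] * 895 + [0.08] * 5 + [0.30] * 900        # 1 blink/min

flag_g, rate_g = flag_low_blink_rate(genuine)
flag_s, rate_s = flag_low_blink_rate(suspect)
print(flag_g, rate_g, flag_s, rate_s)  # prints False 17.0 True 1.0
```

A single hand-tuned cue like this is easy for the next generation of generators to learn around, which is why research has moved toward trained classifiers combining many subtle artifacts, and why detection is best understood as an arms race rather than a solved problem.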
5. Privacy and Data Protection
Deepfake technology also raises significant privacy concerns. In many cases, creators gather images and videos from publicly available sources on the internet to train artificial intelligence models. These images may belong to individuals who have never given consent for their likeness to be used in such systems. The unauthorized use of personal data in this manner raises serious questions regarding informational privacy and data protection.[7] Although India has strengthened its legal framework for personal data protection through the Digital Personal Data Protection Act, 2023, enforcing these protections becomes difficult when deepfake creators operate anonymously or from foreign jurisdictions.
Beyond the direct harm caused to individuals, deepfakes also threaten the credibility of digital information. As manipulated media becomes more common, people may begin to doubt the authenticity of genuine videos and recordings as well. This phenomenon is sometimes described as the “liar’s dividend,” where individuals accused of wrongdoing dismiss authentic evidence by claiming that it is fabricated. These developments illustrate that deepfake technology presents complex regulatory challenges that demand a comprehensive and forward-looking legal response.
III. Supporting Authority
Although India does not currently have legislation that specifically targets deepfake technology, several existing laws may still be applied depending on the circumstances of a particular case.
Information Technology Act, 2000
One important statute in this regard is the Information Technology Act, 2000, which regulates cybercrime and electronic communications in India. Section 66C of the Act criminalizes identity theft, including the fraudulent use of another person’s electronic credentials or identifying information. If deepfake technology is used to impersonate an individual in digital communications, this provision may be relevant. Section 66D addresses cheating by personation using computer resources, and may apply in situations where manipulated audio or video recordings are used to deceive individuals or obtain financial benefits.[8] The Act also contains provisions dealing with the transmission of obscene material in electronic form. Section 67 criminalizes the publication or circulation of obscene or sexually explicit content through digital platforms, and may be invoked in cases involving non-consensual deepfake pornography.
Bharatiya Nyaya Sanhita, 2023
In addition to cybercrime legislation, provisions under the Bharatiya Nyaya Sanhita, 2023 may also be applicable.[9] For example, offenses relating to defamation may arise when manipulated media damages an individual’s reputation. Similarly, provisions dealing with cheating or forgery may apply when deepfakes are used for fraudulent purposes.
Judicial Precedents
Judicial decisions have also shaped the legal framework for digital rights in India. In Justice K.S. Puttaswamy v. Union of India (2017), the Supreme Court recognized the right to privacy as a fundamental right under Article 21 of the Constitution, emphasizing the protection of personal autonomy and informational privacy in the digital age. Another important case is Shreya Singhal v. Union of India (2015), where the Court read down the intermediary-liability provisions of the Information Technology Act and held that platforms must remove unlawful content upon acquiring actual knowledge through a court order or an appropriate government notification, rather than on the basis of private complaints alone.
Regulatory Framework
Regulatory efforts have also attempted to strengthen online accountability. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require social media platforms to establish grievance redressal mechanisms and comply with due-diligence obligations when dealing with unlawful online content.[10]
International Developments
Several foreign jurisdictions have introduced laws specifically targeting deepfake misuse, particularly in the context of election interference and non-consensual explicit content. In the United States, for example, certain states have enacted legislation criminalizing the non-consensual distribution of deepfake intimate imagery and prohibiting deepfake political advertisements close to election dates. The European Union’s Artificial Intelligence Act similarly imposes transparency obligations on providers of AI systems capable of generating synthetic media. These initiatives demonstrate a growing global recognition that synthetic media presents unique regulatory challenges that may require specialized legal responses.
IV. Conclusion
Deepfake technology represents one of the most significant challenges arising from the intersection of artificial intelligence and digital communication. Although the technology has beneficial applications in fields such as entertainment, education, and digital innovation, its misuse can cause serious harm to individuals and society. In India, the current legal response to deepfakes relies largely on existing cybercrime laws and criminal provisions. While these laws provide certain legal remedies, they were not specifically designed to address AI-generated media manipulation. Consequently, authorities often face difficulties in identifying offenders, removing harmful content, and enforcing accountability.
As deepfake technology continues to become more sophisticated, it is increasingly important for policymakers to consider more comprehensive regulatory strategies. Possible measures could include introducing specific legal provisions addressing malicious deepfake creation, requiring clear labeling of AI-generated media, and strengthening the responsibilities of digital platforms in detecting manipulated content. At the same time, regulatory interventions must be carefully balanced with the need to protect freedom of expression and encourage technological innovation.
Ultimately, addressing the risks associated with deepfake technology will require cooperation between governments, technology companies, and civil society organizations. By developing clear legal standards and encouraging responsible technological practices, India can better respond to the challenges posed by synthetic media while safeguarding democratic values and individual rights in the digital era.
References
[1] Robert Chesney & Danielle Citron, Deepfakes and the New Disinformation War, 98 Foreign Aff. 147 (2019).
[2] Siwei Lyu, DeepFake Detection: Current Challenges and Next Steps, IEEE International Conference on Multimedia & Expo Workshops (2020).
[3] Danielle Keats Citron & Robert Chesney, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753 (2019).
[4] Aparna Chandra, Artificial Intelligence and Legal Regulation in India, 14 Nat’l L. Sch. India Rev. 89 (2021).
[5] Ian Goodfellow et al., Generative Adversarial Nets, in 27 Advances in Neural Information Processing Systems (2014).
[6] Chesney & Citron, supra note 1.
[7] Justice B.N. Srikrishna Committee, Report of the Committee of Experts on Data Protection Framework for India (2018).
[8] Information Technology Act, 2000, §§ 66C–66D (India).
[9] Bharatiya Nyaya Sanhita, 2023 (India).
[10] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (India).