Published On: August 21st 2025
Authored By: Anjali Bansal
LNCT University, Indore
Introduction
In the digital age, the line between reality and fiction is becoming increasingly blurred due to advancements in artificial intelligence (AI). One of the most controversial by-products of AI is “deepfakes”—synthetic media generated by machine learning algorithms that can convincingly mimic real people’s faces, voices, and mannerisms. While deepfakes can be entertaining or educational when used responsibly, they pose serious threats to privacy, reputation, democracy, and national security when weaponized.
Despite their growing prevalence, the legal framework to address deepfakes remains inadequate or inconsistent across jurisdictions. This article explores the technological underpinnings of deepfakes, examines the legal challenges they pose, analyzes existing legislation in India and other jurisdictions, and proposes recommendations to bridge the legal vacuum.
What are Deepfakes?
“Deepfake” is a portmanteau of “deep learning” and “fake.” It refers to media generated using generative adversarial networks (GANs) or similar AI techniques that create hyper-realistic video, audio, or image content that appears authentic but is entirely fabricated.
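The adversarial idea behind GANs can be illustrated with a toy sketch: a generator learns to produce samples that a discriminator cannot tell apart from real data. The following is a minimal, illustrative NumPy example on one-dimensional data with hand-derived gradients; it is an assumption-laden teaching sketch, not a real deepfake pipeline or any production GAN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1.25) -- stands in for authentic media.
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

# Generator: maps noise z ~ N(0, 1) to a sample via g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, n = 0.01, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator ascent: maximise log d(real) + log(1 - d(fake)).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((1 - dr) * real) + np.mean(-df * fake)
    grad_c = np.mean(1 - dr) + np.mean(-df)
    w += lr * grad_w
    c += lr * grad_c

    # Generator ascent: maximise log d(fake) (non-saturating loss).
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# After training, the generated distribution's mean has drifted toward the
# real mean of 4.0, even though the generator never sees real data directly.
print(f"generated mean ~ {b:.2f}, real mean = 4.00")
```

Real deepfake systems replace these scalar functions with deep convolutional networks over pixels and audio, but the adversarial training loop is the same, which is why generated output can become statistically indistinguishable from authentic media.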
The first significant public encounter with deepfakes came in 2017, when manipulated pornographic videos of celebrities began appearing on Reddit. Since then, deepfakes have expanded into:
- Political misinformation (e.g., fake speeches by world leaders)
- Financial fraud (e.g., voice cloning for scams)
- Revenge porn and cyberbullying
- Corporate espionage and blackmail
The Legal Vacuum: Why Deepfakes Are Hard to Regulate
Despite the risks, existing legal frameworks struggle to keep up with the nuances of synthetic media for several reasons:
1. Lack of Specific Legislation
Most countries, including India, lack specific laws that directly address the creation, distribution, or misuse of deepfakes.
2. Difficulty in Attribution
Deepfakes are often created and circulated anonymously or from overseas servers, making it hard to trace the perpetrators.
3. Ambiguity in Harm
Deepfakes can be humorous or parodic in some contexts and malicious in others. This makes regulation a delicate balancing act with free speech and fair use rights.
4. Technological Arms Race
Detection tools are playing catch-up with generation tools, creating a reactive rather than proactive enforcement landscape.
Legal Frameworks in India
India lacks standalone legislation governing deepfakes or synthetic media. However, certain provisions of the Indian Penal Code, 1860, the Information Technology Act, 2000, and the proposed Digital India Act may be invoked in cases involving deepfakes.
1. Information Technology Act, 2000
- Section 66E—Punishes violation of privacy by capturing, publishing, or transmitting the image of a private area without consent.
- Sections 67 and 67A—Penalize publishing obscene or sexually explicit content in electronic form.
- Section 69A—Empowers the government to block public access to any information for reasons like public order or morality.
2. Indian Penal Code, 1860
- Section 354C – Addresses voyeurism.
- Section 500 – Criminal defamation for harming a person’s reputation.
- Section 509 – Words, gestures, or acts intended to insult the modesty of a woman.
Case Reference:
In X v. Union of India (2021), the Delhi High Court directed social media platforms to take down a non-consensual deepfake video of a woman, underscoring the right to privacy under K.S. Puttaswamy v. Union of India (2017).
However, these provisions are often reactive and fragmented, failing to comprehensively cover the full spectrum of deepfake harms.
International Legal Approaches
United States
The U.S. has no comprehensive federal deepfake statute, but several states have enacted legislation:
- California's AB 602 (2019) creates a civil cause of action against the non-consensual creation or distribution of sexually explicit deepfakes.
- Texas’s SB 751 (2019) makes it illegal to publish deepfakes with intent to influence elections.
At the federal level, the DEEPFAKES Accountability Act (proposed in 2019) seeks to require watermarks and metadata tagging in AI-generated media.
European Union
The EU AI Act, first proposed in 2021 and adopted in 2024, imposes dedicated transparency obligations on deepfakes. Article 50 of the final Act (Article 52 of the 2021 draft) specifically requires that users be clearly informed when content is artificially generated or manipulated.
The General Data Protection Regulation (GDPR) may also be applicable in cases where deepfakes violate personal data rights.
Free Speech vs. Regulation
A key tension in regulating deepfakes lies in balancing freedom of expression with protection from harm. Article 19 of the Indian Constitution guarantees free speech but allows reasonable restrictions for public order, decency, and morality.
Deepfakes used for satire or parody might be protected under free speech, while malicious deepfakes—especially those involving impersonation, defamation, or harassment—fall outside its ambit.
Judicial Perspective:
In Shreya Singhal v. Union of India (2015), the Supreme Court struck down Section 66A of the IT Act, holding that vague and overbroad restrictions criminalizing online speech are unconstitutional. Any future deepfake regulation must therefore be narrowly tailored to avoid arbitrary enforcement.
Deepfakes and the Right to Privacy
The creation and circulation of deepfakes—especially sexually explicit or defamatory ones—constitute a grave violation of the right to privacy, recognized as a fundamental right under Article 21.
In Justice K.S. Puttaswamy v. Union of India (2017), the Supreme Court held that informational privacy is a subset of the right to privacy. Deepfakes, especially those that mimic biometric identifiers such as a person's face or voice, infringe upon this right.
Role of Intermediaries and Platforms
Social media platforms and content hosts act as intermediaries under Section 79 of the IT Act and enjoy “safe harbor” if they act as passive conduits. However, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, impose greater obligations:
- Intermediaries must remove flagged content within 36 hours of receiving a court order or government notification (Rule 3(1)(d)).
- Significant social media intermediaries must endeavour to deploy automated tools to proactively detect certain categories of harmful content (Rule 4(4)).
Criticism: These rules, while useful for tackling deepfake proliferation, are often criticized for chilling effects on free speech and overburdening smaller platforms.
Technological and Ethical Considerations
Detection Tools
Many tech companies are investing in detection mechanisms:
- Microsoft’s Video Authenticator
- Facebook’s Deepfake Detection Challenge
- Sensity AI’s detection-as-a-service model
However, detection is inherently reactive and often fails at scale or against sophisticated GANs.
Ethical Use
AI ethicists recommend the development of consent-based models for synthetic media and watermarking technologies. The Partnership on AI has issued best practices for synthetic media generation, calling for transparency, consent, and traceability.
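The transparency, consent, and traceability principles described above can be made concrete with a toy provenance manifest for a piece of synthetic media. The sketch below is illustrative only: the `make_provenance_record` helper and its field names are assumptions loosely inspired by content-provenance schemes such as C2PA, not an implementation of any published standard or of the Partnership on AI's guidance.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(media_bytes: bytes, creator: str,
                           consent_obtained: bool) -> dict:
    """Build a simple provenance manifest for a piece of synthetic media.

    Illustrative sketch only; field names are assumptions, not a standard.
    """
    return {
        # Hash binds the manifest to the exact media bytes (traceability).
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "synthetic": True,                     # transparency: labelled as AI-generated
        "consent_obtained": consent_obtained,  # consent of the depicted person
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Check whether the media still matches its manifest."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

media = b"\x89FAKE-VIDEO-BYTES"  # placeholder for real media content
record = make_provenance_record(media, creator="studio@example.com",
                                consent_obtained=True)

print(json.dumps(record, indent=2))
print(verify_provenance(media, record))             # unmodified media matches
print(verify_provenance(media + b"edit", record))   # any edit breaks the hash
```

A hash-bound manifest of this kind makes tampering detectable after the fact, but note that it only works if the manifest itself is distributed through a trusted channel; that trust-anchoring problem is precisely what standards bodies and the proposed watermarking mandates aim to solve.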
Recommendations for India
- Dedicated Legislation: India must enact a Deepfake Regulation Act to criminalize malicious deepfakes, define key terms (e.g., synthetic media, biometric spoofing), and differentiate between benign and harmful uses.
- Consent-Based Models: Introduce legal requirements for explicit consent from individuals whose likeness is used in any synthetic content.
- Watermarking Standards: Mandate tamper-proof watermarks and metadata for AI-generated media.
- Intermediary Accountability: Make platforms more accountable for quickly identifying, flagging, and removing deepfake content.
- Digital Literacy Campaigns: Equip users with skills to detect misinformation and educate them on reporting mechanisms.
- Judicial Guidelines: Courts should develop jurisprudence on deepfake harms by drawing from comparative constitutional principles, especially privacy and dignity.
Conclusion
Deepfakes represent a technological marvel with tremendous creative potential but an equally formidable capacity for harm. The law, as it stands, is ill-equipped to deal with the multidimensional risks posed by synthetic media. India must act swiftly and smartly—by developing a nuanced legal framework, empowering users, and holding platforms accountable—to prevent a future where truth becomes indistinguishable from fiction.
As synthetic media becomes mainstream, the real challenge will not be about eliminating deepfakes entirely but ensuring that their use is ethical, consensual, and lawful.
References
- Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1
- Shreya Singhal v. Union of India, (2015) 5 SCC 1
- Information Technology Act, 2000
- DEEPFAKES Accountability Act, H.R. 3230, 116th Congress (2019-2020)
- European Union Artificial Intelligence Act (2021 Draft)