Published on: 05th January 2025
Authored by: Sumayyah A. Abdulhameed
Fountain University, Osogbo, Osun State, Nigeria
Introduction
In today’s digital age, seeing is no longer believing. Artificial intelligence (AI) has enabled the creation of deepfakes: hyper-realistic audio, image, or video content that impersonates real people. These AI-generated materials have evolved from simple entertainment or cinematic tools into powerful instruments that can disrupt political processes, damage reputations, facilitate fraud, and violate personal dignity. For instance, deepfake videos have been used to manipulate elections in the United States and India, circulate non-consensual sexual content targeting women, and impersonate executives for corporate fraud. Globally, the prevalence of deepfakes is increasing; a 2024 report by Deeptrace estimated that the number of deepfake videos online doubled between 2022 and 2024, with over 90% targeting women.[1]
Despite the growing scale and sophistication of deepfakes, law enforcement and legal systems remain largely reactive. Traditional privacy, defamation, and intellectual property laws were not designed for synthetic media, leaving victims under-protected and perpetrators often unpunished. Deepfakes represent an unprecedented challenge to personal dignity, freedom of expression, electoral integrity, and international security. This article argues that existing legal frameworks are insufficient and explores how international, national, and technological responses can be harmonized to protect rights while fostering innovation.
Understanding Deepfakes: The Technology and the Threat
Deepfakes are generated using advanced machine learning techniques, particularly Generative Adversarial Networks (GANs). A GAN consists of two AI models: a generator that creates synthetic data and a discriminator that evaluates its authenticity. Through iterative training, these networks produce content that is nearly indistinguishable from reality.[2] The technology now allows users with minimal technical knowledge to create videos where someone appears to speak words they never uttered, or images showing them in compromising situations.
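The adversarial structure described above can be illustrated with a minimal from-scratch sketch: a two-parameter generator and a logistic-regression discriminator trained against each other on 1-D data. This is a toy illustration of the GAN training loop, not a production deepfake model; all hyperparameters and the 1-D setup are illustrative assumptions.

```python
# Toy GAN: generator g(z) = a*z + b tries to mimic "real" Gaussian data,
# while discriminator D(x) = sigmoid(w*x + c) tries to tell them apart.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0      # generator parameters
w, c = 0.1, 0.0      # discriminator parameters

TARGET_MEAN, TARGET_STD = 4.0, 0.5   # distribution of the "real" data
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(TARGET_MEAN, TARGET_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator update: push D(fake) -> 1 (non-saturating GAN loss).
    d_fake = sigmoid(w * fake + c)
    g = -(1 - d_fake) * w            # gradient of -log D(fake) w.r.t. fake
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

samples = a * rng.normal(0.0, 1.0, 1000) + b
# The generated samples drift toward the real distribution's mean.
```

The same adversarial dynamic, scaled up to deep convolutional networks and millions of face images, is what makes modern deepfakes nearly indistinguishable from genuine footage.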
While deepfakes can serve artistic or educational purposes, their misuse raises severe legal and ethical concerns. Non-consensual sexual content, fake political speeches, fraudulent impersonation, and corporate sabotage are among the most harmful applications. Compounding the problem, deepfakes are highly scalable and difficult to attribute. A single malicious video can be uploaded to multiple platforms, shared worldwide, and modified to evade detection. Detecting and proving the authenticity of such content remains a technical and legal challenge. Even the EU’s AI Act struggles to define “deepfake” in a legally precise manner, highlighting the difficulty regulators face in keeping pace with rapidly evolving AI technologies.[3]
Legal Challenges Posed by Deepfakes
- Privacy and Consent Violations
Non-consensual sexual deepfakes disproportionately target women, threatening personal privacy, bodily autonomy, and reputation. Existing privacy laws often require physical acts or tangible harm, leaving victims of synthetic abuse in a legal grey zone.[4] Laws like the EU’s General Data Protection Regulation (GDPR) offer some protections for biometric data, but their application to AI-generated likenesses remains ambiguous. In India, the IT Act and the Digital Personal Data Protection Act 2023 provide partial coverage but lack deepfake-specific provisions.[5]
- Defamation and Identity Misuse
Deepfakes can falsely depict individuals engaging in illegal or socially unacceptable acts, causing reputational harm. Traditional defamation law requires falsity, publication, and harm. However, deepfakes complicate this process: attribution is unclear, the content may be rapidly replicated, and determining intent can be difficult. In 2023, a deepfake video of a public figure circulated online, falsely showing them endorsing controversial statements, raising complex legal questions about the liability of platforms versus content creators.[6]
- Intellectual Property and Personality Rights
AI-generated content frequently uses the likeness, voice, or gestures of individuals without consent. Many jurisdictions do not yet explicitly protect synthetic reproductions under copyright or personality rights. The EU is developing legislation recognizing the right to one’s likeness and voice, while the U.S. has limited state-level statutes. Without comprehensive legal protections, individuals are unable to prevent misuse or seek adequate remedies.[7]
- Criminal and Evidentiary Issues
Deepfakes have facilitated new forms of criminal activity, including election manipulation, phishing, blackmail, and financial fraud. Courts must determine authenticity and authorship, but forensic standards for AI-manipulated content are underdeveloped. Traditional evidence law struggles with synthetic content, often requiring expert testimony and advanced verification methods, which may delay justice or limit accountability.[8]
The Global Regulatory Landscape
United States
U.S. regulation is largely state-based, with laws targeting non-consensual sexual deepfakes, election interference, and defamation. For example, California Civil Code § 1708.85 creates civil liability for distributing non-consensual sexually explicit material, extended to synthetic depictions by § 1708.86, while Texas prohibits political deepfakes within 30 days of an election.[9] The federal Take It Down Act 2025 extends these protections by criminalizing the publication of non-consensual intimate imagery, including AI-generated deepfakes, and mandating rapid platform takedown procedures.[10]
European Union
The EU has led global regulation with the AI Act (Regulation (EU) 2024/1689) and the Digital Services Act (DSA). The AI Act requires transparency for high-risk AI systems, including deepfake generation, while the DSA mandates swift removal of illegal content from platforms. Denmark has proposed laws granting individuals copyright over their face, voice, and body to combat misuse of synthetic content.[11]
Asia
India lacks deepfake-specific legislation but relies on the IT Act, which covers harassment, obscene content, and data misuse. China has proposed AI content regulations focusing on accountability for misinformation and online fraud, reflecting the global concern about synthetic media in authoritarian and democratic contexts alike.[12]
Cross-Border Enforcement
Deepfakes are global in reach, and national laws alone cannot fully address their harms. Transnational cooperation, mutual legal assistance treaties (MLATs), and harmonized AI regulations are needed to prevent jurisdictional loopholes for perpetrators operating across borders.[13]
Towards a Regulatory Framework for Deepfakes
A comprehensive response requires legal, technological, and societal measures.
- Legal Recognition of Harm
Statutes should explicitly define synthetic impersonation and criminalize malicious deepfake creation. Laws must balance criminalization of harmful content with protection of freedom of expression and artistic use.[14]
- Platform Accountability
Online platforms should be legally obligated to detect, label, and remove deepfake content swiftly. Watermarking, metadata tagging, and transparency reports enhance accountability, following models in the EU and U.S.[15]
- Civil and Criminal Remedies
Victims should have access to civil remedies (privacy, defamation, and personality rights) alongside criminal prosecution. Courts must adapt evidentiary rules to handle AI-generated content, allowing expert forensic analysis without imposing undue burdens on victims.[16]
- Human Rights and Gender Lens
Women and vulnerable groups are disproportionately affected by non-consensual sexual deepfakes. Legal frameworks should recognise such misuse as gender-based violence and ensure survivor-centred reporting mechanisms and protections.[17]
- Technological Safeguards and Education
Watermarking, detection algorithms, and metadata standards are crucial. Additionally, public education campaigns can raise awareness about synthetic media, improving societal resilience against manipulation.[18]
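The watermarking and metadata-tagging measures discussed above can be sketched in miniature: embedding a provenance tag in the least-significant bits of an image's pixel values and recovering it later. Real provenance schemes (such as C2PA-style signed metadata or robust, tamper-resistant watermarks) are far more sophisticated; this is only a toy illustration, and the tag format is an assumption.

```python
# Minimal LSB watermark: hide a short UTF-8 tag in an image's pixel values.
import numpy as np

def embed_tag(image: np.ndarray, tag: str) -> np.ndarray:
    """Write the bits of `tag` into the least-significant bits of pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
    flat = image.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small for tag")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def read_tag(image: np.ndarray, length: int) -> str:
    """Recover a `length`-byte tag from the least-significant bits."""
    bits = image.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

img = np.random.default_rng(1).integers(0, 256, (32, 32), dtype=np.uint8)
marked = embed_tag(img, "AI-GENERATED")
print(read_tag(marked, len("AI-GENERATED")))  # -> AI-GENERATED
```

Because each pixel changes by at most one intensity level, the tag is invisible to viewers, yet platforms could check for it automatically; the legal question is whether such labelling should be mandatory for AI-generated media, as the EU AI Act's transparency rules suggest.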
- International Collaboration
International organisations, such as the UN and Council of Europe, should coordinate cross-border regulatory strategies. Sharing technical expertise, legal best practices, and harmonized frameworks ensures deepfakes cannot exploit jurisdictional gaps.[19]
Conclusion
Deepfakes challenge foundational assumptions about identity, evidence, truth, and democracy. Rapid technological advancement has outpaced legal and ethical frameworks, creating a vacuum that threatens personal dignity, political integrity, and societal trust. Bridging this gap requires legislative clarity, platform accountability, global cooperation, and human-rights-centred approaches. While synthetic media offers creative and educational opportunities, the stakes are high: without robust legal, technological, and social safeguards, the authenticity of reality itself is at risk. Governments, platforms, and civil society must act decisively to protect truth in the digital age.
Bibliography
- Chesney R and Citron D, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) 107 California Law Review 1753.
- Kietzmann J and Pitt L, ‘Deepfakes: Understanding the Technology’ (2020) 63 Business Horizons 135.
- Brundage M et al, ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’ (2018) arXiv:1802.07228.
- Maras M, Deepfakes: Detection, Implications, and Regulation (Palgrave Macmillan 2022).
- Solove D, Understanding Privacy (Harvard University Press 2021).
- Waldman A, ‘Defamation in the Age of Deepfakes’ (2020) 25 Yale Journal of Law & Technology 1.
- European Commission, ‘The AI Act Explained’ (2024).
- Take It Down Act 2025 (US Federal Law).
- UN Office of Drugs and Crime, ‘Cybercrime and Emerging AI Threats’ (2023).
- UN Human Rights Council, ‘Report on Artificial Intelligence and Human Rights’ (2023).
- DesignBoom, ‘Denmark to Give Citizens Copyright over Body, Voice, and Face’ (2025).
- Lawful Legal, ‘Deepfakes and the Law: The Need for a Robust Framework’ (2024).
[1] Deeptrace, ‘The State of Deepfake Technology 2024’ (2024) https://www.deeptracetech.com/deepfake-report-2024
[2] Kietzmann J and Pitt L, ‘Deepfakes: Understanding the Technology’ (2020) 63 Business Horizons 135.
[3] European Commission, ‘The AI Act Explained’ (2024) https://digital-strategy.ec.europa.eu/en/library/ai-act-explained
[4] Solove D, Understanding Privacy (Harvard University Press 2021).
[5] IT Act 2000 (India); Digital Personal Data Protection Act 2023 (India).
[6] Waldman A, ‘Defamation in the Age of Deepfakes’ (2020) 25 Yale Journal of Law & Technology 1.
[7] Euronews, ‘Denmark Fights Back Against Deepfakes With Copyright Protection’ (2025) https://www.euronews.com/next/2025/06/30/denmark-fights-back-against-deepfakes-with-copyright-protection-what-other-laws-exist-in-e
[8] Maras M, Deepfakes: Detection, Implications, and Regulation (Palgrave Macmillan 2022).
[9] California Civil Code §§ 1708.85 and 1708.86; Texas Election Code § 255.004.
[10] Take It Down Act 2025 (US Federal Law).
[11] DesignBoom, ‘Denmark to Give Citizens Copyright over Body, Voice, and Face’ (2025) https://www.designboom.com/technology/denmark-pass-law-citizens-copyright-face-voice-ai-deepfakes-07-03-2025/
[12] Lawful Legal, ‘Deepfakes and the Law: The Need for a Robust Framework’ (2024) https://lawfullegal.in/deepfakes-and-the-law-the-need-for-a-robust-legal-framework/
[13] UN Office of Drugs and Crime, ‘Cybercrime and Emerging AI Threats’ (2023) https://www.unodc.org/unodc/en/cybercrime/ai-threats.html
[14] Chesney R and Citron D, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) 107 California Law Review 1753.
[15] European Commission, ‘Digital Services Act Explained’ (2024).
[16] Waldman (n 6).
[17] Maras (n 8).
[18] Brundage M et al, ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’ (2018) arXiv:1802.07228.
[19] UN Human Rights Council, ‘Report on Artificial Intelligence and Human Rights’ (2023) UN Doc A/HRC/52/45.