Published On: April 18, 2026
Authored By: Rabiya Parveen
Law Centre-II, University of Delhi
Abstract
With the emergence of deepfake technology — which enables the construction of highly realistic yet entirely artificial audio-visual content through advances in artificial intelligence — criminal law confronts one of its most formidable contemporary challenges. Unlike traditional modes of falsification, deepfakes undermine longstanding conceptions of identity, authorship, and evidentiary reliability, thereby complicating legal responses to deception, reputational harm, fraud, and non-consensual intimate imagery. In India, the regulation of deepfake-related harms depends on existing provisions of the Information Technology Act and general principles of penal law, neither of which was designed to address synthetic media manipulation. This article argues that while such provisions may occasionally be invoked, the current framework is conceptually disjointed and technologically misaligned. Through an analysis of the nature of deepfake harms, the limits of applicable offence doctrines, and emergent evidentiary and enforcement challenges, the article evaluates the sufficiency of India’s criminal law regime and advocates for principled, technology-attentive reform.
I. Introduction
Technological advances have repeatedly compelled legal systems to reconsider their foundational premises, and few developments have posed as deep a challenge as deepfake technology. Deepfakes — artificially generated or algorithmically altered audio-visual content — can produce highly convincing depictions of individuals engaging in speech or conduct that never occurred. Their disruptive legal significance lies not merely in their capacity to deceive, but in their ability to destabilize the basic assumptions upon which legal reasoning and adjudication have historically rested: the credibility of perception, the continuity of identity, and the reliability of evidence.[1]
Criminal law, as a regulatory mechanism, developed to address human-driven misrepresentation, fraud, defamation, and forgery.[2] Its conceptual architecture presupposes identifiable actors, material instruments of deception, and comparatively stable distinctions between authentic and artificial artifacts. Deepfakes challenge these assumptions by generating synthetic representations that are difficult to distinguish from reality, that can be disseminated across digital platforms at scale, and that can produce immediate and potentially irreversible harm. The legal questions arising from this technology extend well beyond conventional problems of technological misuse — they reach the deeper questions of how culpability is established, how evidence is authenticated, and whether traditional offence structures remain adequate.
India’s legal responses to deepfake-facilitated harms have been largely derivative, relying on statutory provisions of the Information Technology Act and doctrines of general penal law. While certain offences — relating to impersonation, obscenity, defamation, and cheating — may appear adaptable in scope, their application to deepfakes frequently demands substantial interpretive extension.[3] This raises a critical analytical question: does the current criminal law framework genuinely accommodate the distinctive harms of synthetic media manipulation, or does it offer only partial and conceptually strained solutions?
This article argues that while existing provisions occasionally succeed in capturing discrete instances of deepfake-related misconduct, the overall framework is conceptually and technologically incoherent. The misalignments manifest across doctrinal clarity, evidentiary standards, enforcement capacity, and normative objectives. The article examines the sufficiency of India’s existing law by analysing the nature of deepfake harms, the design and limits of applicable provisions, and the pathways toward coherent and principled legal reform.
II. Deepfakes as a Unique Legal Harm
Deepfakes present a category of harm that resists easy classification within traditional criminal law. Unlike conventional falsifications — which typically involve distorting an existing document, impersonating a human actor, or publishing misleading information — deepfakes enable the artificial fabrication of events, speech, and identities. This shift from modification to fabrication carries significant legal implications. While historical criminal law concepts were built around recognizable forms of deception grounded in tangible media — a forged document, a false statement, a physical impersonation — deepfakes operate through algorithmic simulation and, in many instances, lack a directly analogous antecedent in existing doctrine.
Among the most immediate harms associated with deepfakes is identity distortion. Synthetic media can portray individuals as saying or doing things that bear no relationship to reality, thereby compromising personal autonomy and reputation.[4] This is not merely a case of falsehood — it is a manipulation of perceived reality itself. Even where a deepfake’s inauthenticity is eventually established, victims may suffer lasting reputational, professional, and psychological harm. These injuries are compounded by the scalability and virality of digital platforms, which can propagate harmful content at a scale far exceeding the reach of any legal intervention.
Deepfakes also generate novel forms of fraud and deception.[5] Traditional offences of cheating and impersonation presuppose human agency — a person actively misrepresenting themselves or another. Deepfake technology enables automated, decentralized, and anonymously generated synthetic impersonation, embedding deception within the media itself rather than in verbal or written representation. This complicates the attribution of intent, authorship, and culpability — all of which are foundational to criminal liability.
Deepfakes also pose systemic threats to the integrity of legal proceedings. Audio-visual material has historically occupied a privileged evidentiary status, often being treated as a reliable indicator of truth.[6] The proliferation of synthetic media undermines this presumption, raising the prospect of fabricated evidence, political disinformation, and strategic invocation of plausible deniability. This is therefore an epistemic problem as much as an individual harm: deepfakes impair the processes through which legal systems establish facts.[7]
Taken together, these considerations suggest that deepfakes represent a legally cognizable category of harm in their own right. Their technological mediation, capacity for mass dissemination, and ability to fabricate reality impose burdens on doctrinal structures that were designed to address materially different wrongs.
III. The Existing Criminal Law Framework in India
India’s current approach to deepfake-related harms relies on interpretive applications of existing statutory provisions rather than any deepfake-specific legislative text. This reliance reflects a broader structural tendency in criminal law, whereby novel forms of misconduct are initially accommodated through doctrinal adaptation. The adequacy of such adaptation, however, must be critically examined in light of the conceptual and technological features that distinguish deepfakes from conventional deception or falsification.
Several provisions of the Information Technology Act, 2000 are potentially applicable to synthetic media.[8] Offences relating to identity theft, impersonation, and cheating by personation may, at least on the surface, capture the use of deepfakes to impersonate individuals for deceptive purposes.[9] Provisions governing the publication of obscene or sexually explicit content may similarly apply to deepfake-generated intimate imagery.[10] However, these provisions were designed around concerns of unauthorized access, data misuse, and the transmission of prohibited material — not the synthetic construction of reality. Applying them to deepfakes therefore requires considerable doctrinal stretching, raising doubts about normative coherence.
Additional avenues for prosecution exist under the general penal law doctrines of the Indian Penal Code, 1860 and, following the 2023 legislative transition, the Bharatiya Nyaya Sanhita. Deepfake-facilitated misconduct may overlap with offences such as cheating, defamation, forgery, criminal intimidation, and crimes against dignity and modesty.[11] Yet these doctrines were fashioned around assumptions of human agency, physical instruments of deception, and recognizable acts of falsification. Forgery provisions, for instance, traditionally contemplate the alteration or fabrication of existing documents or records, whereas deepfakes are synthetic constructs that have no authentic original to falsify.[12]
The resulting framework is fragmented and indirect. Liability attaches not to the synthetic nature of the content but to its downstream consequences. This design is prone to uneven application, interpretive inconsistency, and enforcement difficulties. The absence of a legislative instrument specifically addressing deepfakes as a distinct regulatory concern reflects the limitations of relying exclusively on doctrinal adaptation.
IV. Doctrinal Misalignments and Conceptual Gaps
The central challenge of regulating deepfake harms under India’s current criminal law is not the absence of potentially applicable provisions, but rather the conceptual incompatibility between those provisions and the process of synthetic media fabrication. Traditional criminal law doctrines were constructed to address misconduct grounded in human agency, material falsification, and relatively stable distinctions between genuine and counterfeit artifacts. Deepfakes challenge all of these assumptions and thereby expose structural limitations within the doctrinal framework.
A foundational difficulty is the absence of a stable legal definition or classification of deepfakes. Without legislative recognition of synthetic media as a distinct category of potentially harmful content, adjudication must proceed on an analogical basis. Courts and enforcement agencies are required to map deepfake-enabled harms onto offences designed around materially different phenomena — forgery, impersonation, or defamation — each of which carries different definitional requirements and fault elements. This dependence on doctrinal analogy risks interpretive inconsistency and normative incoherence, particularly where the applicable provision fails to capture the essential nature of the wrong.
The incompatibility is particularly acute in the context of forgery doctrine.[13] Historically, forgery has presupposed the reproduction or manipulation of a document or record that purports to have an authentic original. Deepfakes, by contrast, involve the synthetic production of audio-visual content that may have no authentic source at all. The deception inheres in the fabrication of reality, not in the falsification of a document — making the application of forgery provisions both analytically strained and normatively unsatisfying. Similar constraints apply to impersonation offences, which traditionally presuppose a human actor actively misrepresenting their identity.[14] Deepfake technology enables synthetic impersonation without any identifiable human actor engaged in active misrepresentation, thereby unsettling traditional conceptions of agency and intent.
Defamation law presents analogous limitations.[15] While reputational harm caused by deepfakes may fall within the doctrine’s scope, defamation law was not designed to address the particular features of algorithmically generated falsehood — including mass scalability, anonymous authorship, and the inherent evidentiary complexity of proving synthetic origin.
These doctrinal tensions reflect a larger structural problem: existing criminal law provisions regulate harmful outcomes but do not address the synthetic processes by which those outcomes are produced. The framework is reactive, fragmented, and conceptually strained in the face of technologically mediated falsification.
V. Evidentiary and Procedural Challenges
Deepfake technology introduces challenges that extend beyond substantive criminal liability into the evidentiary foundations of legal adjudication. The integrity of criminal proceedings depends on the reliability, authenticity, and verifiability of evidence. Audio-visual material has historically enjoyed significant evidentiary weight, frequently being treated as a reliable representation of actual events. The emergence of synthetic media capable of convincingly replicating real individuals and events undermines this evidentiary confidence and generates new procedural vulnerabilities.
A central challenge concerns the authentication of electronic evidence.[16] Existing evidentiary frameworks proceed on the assumption that digital records, even if susceptible to tampering, leave forensically detectable traces of manipulation. Deepfakes complicate this assumption by generating synthetic content that may be indistinguishable from authentic recordings by conventional detection methods, or that may require highly specialized forensic analysis to evaluate. The legal system’s capacity to distinguish genuine recordings from synthetic fakes thereby becomes contingent on technological availability and forensic expertise — both of which vary considerably across jurisdictions and cases.
These challenges intersect with questions of burden of proof.[17] Where a party challenges the authenticity of audio-visual evidence on the ground that it constitutes a deepfake, courts must balance the risks of admitting fabricated evidence against the risks of undue evidentiary exclusion. Deepfakes thus create a form of evidentiary uncertainty that can be strategically exploited — enabling bad-faith denials and the construction of false defences.
There is also a significant concern for procedural fairness. The adjudication of deepfake authenticity disputes may require expert testimony, technical investigation, and resource-intensive forensic procedures, thereby increasing the duration and complexity of litigation. This has direct implications for access to justice, particularly where parties lack the financial means to commission sophisticated forensic analysis. The adversarial process, historically structured around testimonial and documentary contestation, must increasingly confront technologically mediated authenticity determinations for which it was not designed.
VI. Enforcement and Regulatory Limitations
Beyond doctrinal and evidentiary concerns, deepfake-related harms expose significant enforcement and regulatory limitations within India’s criminal justice system. Digital harms are inherently fast-moving, large-scale, and cross-jurisdictional, frequently outpacing the investigative and adjudicatory capacities of traditional institutions. Deepfakes can cause instantaneous and enduring harm upon digital dissemination, while legal responses remain slow, resource-constrained, and institutionally limited.
Attribution presents one of the most serious structural challenges.[18] Tracing the creator or initial distributor of a deepfake is inherently difficult in an environment characterized by anonymity, encrypted communication, and transnational information flows. Traditional investigative methods, premised on physical traces of criminal activity, face substantial practical constraints when applied to decentralized digital misconduct.
Jurisdictional complexity further compounds enforcement difficulties. Deepfake content may be created, stored, hosted, and consumed across multiple jurisdictions, making the application of domestic criminal provisions uncertain.[19] Mechanisms of mutual legal assistance and cross-border cooperation are often procedurally cumbersome and temporally ill-suited to cases of rapidly disseminating digital harm.
The current regulatory framework also fails to adequately address the role of digital intermediaries.[20] While platforms serve as primary channels of distribution, applicable liability frameworks must balance competing interests of innovation, expressive freedom, and harm prevention. The absence of deepfake-specific regulatory obligations creates accountability gaps regarding platform responsibility for detection, removal, and victim remediation.
These enforcement constraints reflect a deeper institutional problem: the criminal justice system, even where it possesses the legal authority to act, may frequently be unable to deliver timely or effective responses to technologically facilitated harms. Deepfakes thus reveal not only legislative gaps but also fundamental limitations in the governance of digital misconduct.
VII. Towards a Coherent Legal Response
Addressing the regulatory challenges posed by deepfakes requires a legislative and institutional approach that moves beyond piecemeal doctrinal adaptation toward genuine conceptual and technological integration. While interpretive extension of existing criminal liability may provide short-term responses, sustained reliance on such tools produces inconsistency, overextension, and normative ambiguity. A coherent legal response requires recognition of synthetic media manipulation as a legally distinct category of harm — not merely a variant of existing criminal offences.
One avenue for reform would be the enactment of express statutory provisions defining deepfakes or artificially generated media, grounded in both the technological processes of synthetic creation and the legally cognizable harms they produce.[21] A harm-centred approach would enable regulation to target the deceptive, injurious, and rights-violating uses of synthetic media while preserving legitimate creative and technological applications. This would align more closely with the foundational premises of criminal law, which ground liability in harm and culpable conduct rather than in the medium or technology involved.
Reform should also address evidentiary standards to account for authenticity disputes involving synthetic content.[22] Frameworks for expert evidence, forensic authentication procedures, and procedural safeguards can mitigate the risks of wrongful conviction, fabricated defences, and unreliable evidence. Equally, clear regulatory obligations on digital intermediaries — particularly regarding detection, timely removal, and victim protection — would enhance enforcement effectiveness without imposing disproportionate burdens on platform operators.
VIII. Conclusion
Deepfake technology constitutes a serious stress test for traditional criminal law systems, exposing conceptual, evidentiary, and enforcement limitations that cannot be resolved through interpretive extension alone. While certain provisions of Indian law may be invoked in relation to specific deepfake-related harms, their application is indirect and doctrinally strained. The resulting framework is fragmented, reactive, and ill-calibrated to the distinctive features of synthetic media manipulation.
The challenges identified in this analysis are not primarily technological — they are structural. Deepfakes erode the legal premises of authorship, identity, falsity, and evidentiary reliability upon which criminal adjudication depends, making the establishment and contestation of liability increasingly complex. These difficulties reflect the inadequacy of doctrinal frameworks designed to address materially different forms of wrongdoing and demand conceptual clarity in the legal response.[23]
A coherent regulatory strategy requires not only the formulation of new offence categories, but an intellectual re-evaluation of how criminal law conceptualizes deception, harm, and authenticity in digitally mediated environments. Harm-focused legislative recognition, evidentiary reform, and clarity in intermediary liability regulation are pathways toward a more stable and principled doctrinal foundation. As synthetic media technologies continue to evolve, the credibility and effectiveness of criminal law will depend on its capacity to adapt without sacrificing coherence, precision, or the fundamental requirements of justice.
References
[1] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
[2] Indian Penal Code, No. 45 of 1860, INDIA CODE (1860).
[3] Information Technology Act § 66C, No. 21 of 2000, INDIA CODE (2000); Information Technology Act § 66D; Indian Penal Code § 499.
[4] Subramanian Swamy v. Union of India, (2016) 7 SCC 221 (India).
[5] Indian Penal Code § 415; Information Technology Act § 66D.
[6] Anvar P.V. v. P.K. Basheer, (2014) 10 SCC 473 (India).
[7] Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal, (2020) 7 SCC 1 (India).
[8] Information Technology Act, No. 21 of 2000, INDIA CODE (2000).
[9] Information Technology Act § 66C; Information Technology Act § 66D.
[10] Information Technology Act § 67; Information Technology Act § 67A.
[11] Indian Penal Code §§ 415, 463, 499.
[12] Indian Penal Code § 463.
[13] Indian Penal Code § 463.
[14] Information Technology Act § 66D.
[15] Indian Penal Code § 499; Subramanian Swamy v. Union of India, (2016) 7 SCC 221 (India).
[16] Anvar P.V. v. P.K. Basheer, (2014) 10 SCC 473 (India); Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal, (2020) 7 SCC 1 (India).
[17] Indian Evidence Act, No. 1 of 1872, INDIA CODE (1872).
[18] Information Technology Act § 75.
[19] Information Technology Act § 1(2); Information Technology Act § 75.
[20] Information Technology Act § 79; Shreya Singhal v. Union of India, (2015) 5 SCC 1 (India).
[21] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
[22] Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal, (2020) 7 SCC 1 (India).
[23] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).