DEEPFAKES AND MISINFORMATION: LEGAL CHALLENGES IN THE DIGITAL AGE

Published On: February 3rd 2026

Authored By: Joy Mercy
Chettinad School of Law

ABSTRACT:

The rapid development of artificial intelligence has led to the advent of deepfake technology, which can create fake audio and video content with remarkable realism. Though deepfakes can serve legitimate creative purposes, their misuse challenges law, ethics, and society, especially where no specific regulations exist. The increasing use of deepfakes in India for fake news, impersonation, non-consensual intimate imagery, and defamation, among other harms, has revealed major loopholes in the current legal system. At present, Indian authorities rely on fragmented provisions of the Information Technology Act, 2000 and the Bharatiya Nyaya Sanhita, 2023, along with data protection and intermediary regulations, none of which directly addresses deepfake technology and its specific harms. This doctrinal study investigates the inability of the current Indian legal framework to deal with deepfake-related crimes and highlights the difficulties caused by unclear definitions, the absence of specific offences, and weak enforcement mechanisms. The paper reviews peer-reviewed legal literature, statutory provisions, and judicial interpretations to show that existing laws remain reactive and inadequate in handling deepfake wrongdoing. It argues that the absence of specific legislation weakens victim protection, creates ambiguities in enforcement, and erodes digital trust. The research underscores the need for a dedicated legal framework that unambiguously defines deepfakes, regulates their misuse, and strikes a balance between technological innovation and the protection of privacy, consent, and dignity in the digital era.

ANALYSIS:

Deepfake technology, the capacity of AI to generate synthetic audio-visual media that produces extremely realistic yet fake representations of real people, has created unique legal problems in the digital era. Although the technology has legitimate applications in film, research, and education, its misuse has led to identity theft, defamation, political manipulation, financial fraud, and mistrust of digital information. Indian legal scholarship consistently points out that current legal frameworks are unfit and fragmented, with no precise laws to tackle deepfake crimes efficiently. This paper discusses the legal and ethical shortcomings in Indian law as identified in five peer-reviewed journal articles and puts forward doctrinal insights for reform.

  1. Inadequacy of the Current Legal Framework

The article in the Indian Journal of Law by Abhay Jain identifies the core issue: the present legal system of India neither explicitly defines nor governs deepfakes[1]. The Information Technology Act, 2000 (IT Act) and the associated cybercrime provisions were created well before synthetic media came into being. As a result, deepfakes are treated under general categories of cybercrime, such as identity theft or defamation, and do not receive the specific legal recognition and regulation they actually need.

The author notes that although deepfakes create new harms such as highly lifelike impersonation and misleading information, the IT Act remains technology-neutral and cannot supply precise legal responses to those harms. For example, Sections 66C and 66E of the IT Act, which deal with identity theft and violation of privacy respectively, are applied by analogy without any specific mention of AI synthesis, resulting in gaps in interpretation and uncertainty in enforcement. The author argues that this fragmented approach reduces the likelihood of obtaining justice and fails to deter wrongdoers.

  2. Ethical and Legal Vacuum Surrounding Deepfake Crimes

The IJLMH article Deceptive Realities supports this conclusion and points to the ethical and legal vacuum created by the lack of explicit legal provisions targeting deepfakes[2]. Whereas traditional cybercrimes have well-defined intent and modes of execution, deepfakes blur the line between creativity and deception. This uncertainty hinders the application of current laws, which are designed for crimes based on mens rea (criminal intent) and observable conduct.

Deepfakes used for slander, political disinformation, or revenge pornography show how the current legal framework lags behind technological advancement and strains legal interpretation. The paper criticizes India’s reliance on general cyber laws and the Bharatiya Nyaya Sanhita, 2023 (BNS), neither of which expressly mentions synthetic media. In practice, criminal prosecution falls back on conventional offences such as defamation or harassment, which are inadequate to address the new harms brought about by deepfakes.

The ethical questions extend beyond legality. Deepfakes pose dilemmas regarding self-determination, consent, and personal integrity, issues that the present criminal laws do not adequately consider. Invoking the constitutional guarantees of privacy and dignity, the authors advocate a rights-based regulatory framework rather than the mechanical application of outdated statutes.

  3. Identity Fraud and Deepfake Technology

A paper by Preksha Singh published in the International Journal of Civil Law and Legal Research connects deepfakes with AI-generated fraud on the dark web and deepens the discussion of that link[3]. The research depicts the role of deepfakes in enabling large-scale cybercrime networks in which synthetic identities are traded on encrypted platforms. India’s digital governance, built on the Aadhaar biometric system and billions of UPI transactions, heightens the risk at the intersection of biometric data breaches and deepfake technology.

Singh points out that both the IT Act, 2000, which is still in force, and the recently passed DPDP Act, 2023 fail to assign liability for algorithmically generated identity fraud or to criminalize purely synthetic media. This creates a legal vacuum exploited by criminals who use AI tools to create synthetic identities and commit financial and political scams without falling clearly within any recognized offence. The law on identity theft, for instance, requires that the fraud involve real personal information rather than an entirely synthetic profile.

The author reports that the dark web and hidden blockchain-based markets have made enforcement even more difficult, noting that existing problems of jurisdiction and technological anonymity further complicate the tracing and prosecution of deepfake fraud in India. These gaps signal the need for precise definitions and legislative creativity to control the harms related to AI-generated media.

  4. Criminal Law and Deepfake Evidentiary Challenges

An article on the legal implications of deepfake technology in criminal law, published in the International Journal of Law Management & Humanities, analyses the impact of deepfakes on evidence and criminal justice processes[5]. The paper observes that classic evidence doctrines were created before the advent of digital manipulation and cannot meet the challenges deepfakes pose to the authenticity and reliability of evidence. In criminal trials, courts require proof of authenticity through the chain of custody and certificates under Section 65B of the Evidence Act. However, deepfakes can be created and modified without leaving any reliable forensic trail, which makes authentication even more difficult and may result in both wrongful convictions and wrongful acquittals.

Moreover, the rapid development of AI tools means that what was regarded as authentic evidence yesterday may be easily counterfeited tomorrow. This technological arms race between improving forensic detection techniques and advancing synthetic media production undermines digital evidence and poses a major challenge for prosecutors, defence attorneys, and judges alike. The study argues that the absence of specific regulatory standards for deepfake evidence leads to judicial reluctance and inconsistent verdicts.
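To make the chain-of-custody point concrete, the integrity check underlying a Section 65B-style certification can be illustrated with a cryptographic hash: any alteration of a media file, however small, changes its digest. The following is a minimal sketch in Python using the standard hashlib library; the file contents and workflow are hypothetical illustrations, not drawn from the cited articles:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest used to fingerprint a piece of digital evidence."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical workflow: hash the media file at seizure, then re-hash it
# before trial to confirm it has not been altered in custody.
original = b"frame-data-of-seized-video"
digest_at_seizure = sha256_digest(original)

# An untampered copy verifies; even a one-byte change breaks the match.
assert sha256_digest(original) == digest_at_seizure
tampered = b"frame-data-of-seized-videX"
assert sha256_digest(tampered) != digest_at_seizure
```

Note the limitation the article identifies: a matching hash proves only that the file was not altered after seizure, not that its content was genuine when captured. A deepfake hashed at seizure verifies just as cleanly as authentic footage.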

  5. Regulatory and Rights-Based Gaps in Misinformation Law

One of the most comprehensive analyses of misinformation in the age of AI is the IJLLR article Regulating Deepfakes in India: A Legal and Ethical Analysis, which highlights that the current legal tools are mainly reactive[4]. The legal framework, particularly the IT Act and the IPC, responds only after harm occurs and cannot provide a unified answer to disinformation spread through deepfake technology.

The author argues that deepfakes not only violate privacy and consent but also endanger democracy, national security, and public trust in digital information. Indirect regulations such as AI content labeling or compulsory watermarking are being tried in some jurisdictions, but no comparable reforms have emerged in India. Without such measures, the country risks responding slowly to digital harms, leaving the constitutional values of freedom of expression (Article 19) and privacy (Article 21) unprotected.

Notably, the article proposes a rights-based regulatory model. The model aims to accommodate the needs of industry while safeguarding against digital violence. One proposal would oblige platforms to detect, label, and remove harmful deepfake content in order to protect the dignity of affected persons, while ensuring that these measures do not suppress lawful speech through a chilling effect.

SYNTHESIS OF GAPS IDENTIFIED

Significant doctrinal gaps emerge across these five sources:

  • Lack of Specific Legal Definition

Currently, no Indian law explicitly defines deepfake or synthetic media. It therefore remains unclear how deepfake crimes should be prosecuted under the general provisions of cyber or criminal law.

  • Inadequate Criminal Offences

The existing provisions are scattered and reactive. There is not yet a specific deepfake offence clarifying the prohibited conduct, intent, and penalties involved. Cybercrime laws operate by interpretation and do not reflect the reality of AI-generated fakes.

  • Platform Accountability and Intermediary Liability

The prevailing intermediary liability regime under the IT Act imposes very few obligations on platforms. There is no explicit requirement of proactive measures for detecting, labeling, or removing deepfakes within a set timeframe, which weakens enforcement mechanisms.

  • Jurisdictional and Enforcement Challenges

Deepfake crimes often involve actors in different countries and technologies such as blockchain and dark web services, which are difficult to trace and monitor, weakening India’s capacity for law enforcement and international cooperation.

  • Evidentiary and Forensic Hurdles

The use of deepfakes as evidence challenges traditional criteria of authentication and reliability, calling for new forensic practices and judicial guidelines.

  • Ethical and Rights Dimensions

Deepfakes raise the issues of consent, privacy, and dignity, thus bringing up ethical and constitutional concerns that are not sufficiently dealt with in the current laws.

SUGGESTIONS:

Drawing from the doctrinal analysis of the five sources, the following recommendations are put forward:

  1. Enact Dedicated Deepfake Legislation

India should pass a dedicated law (or update the IT Act) that:

  • Gives a precise and narrow definition of deepfakes and synthetic media.
  • Makes it a crime to produce and distribute deepfakes that are destructive and harmful.
  • Establishes fines that accurately reflect the degree of harm done.
  • Such a law would create legal certainty and enable enforcement targeted at the specific harms of synthetic media.
  2. Modernize Forensic and Evidentiary Standards

The Indian law of evidence should be reformed in such a manner that:

  • It sets out standards for authenticating digital media and identifying synthetic content.
  • Courts are given practical guidance for admitting deepfake evidence.
  • Forensic departments are given greater capacity to distinguish AI-generated content.
  • Without these reforms, public trust in the courts will erode and digital evidence will no longer be regarded as reliable.
  3. Enhance Institutional Capacity

The police and other law enforcement agencies should:

  • Establish specialized AI-forensics units.
  • Be trained in the use of deepfake detection tools.
  • Participate in devising protocols for international cooperation against cross-border cybercrime.
  • These steps would speed up investigations and improve the chances of successful prosecution.
  4. Digital Literacy and Public Awareness

The state and schools should:

  • Organize awareness sessions on the dangers of deepfakes.
  • Help people become digitally literate and evaluate online content critically.
  • Make people aware of legitimate and illegitimate uses of AI technologies.
  • An informed public is the first line of defence against misinformation; public awareness is the key to this resistance.

CONCLUSION:

A review of the legal literature shows that India’s existing regulatory structure cannot manage the complex injuries caused by deepfake technology. Although the Information Technology Act, the Bharatiya Nyaya Sanhita, and the Digital Personal Data Protection Act offer limited help, they do not actually regulate deepfakes, owing to undefined terms, the absence of laws specifically targeting the technology, and a general failure to grasp its impact.

The absence of a suitable legal structure has produced a scenario in which enforcement is ambiguous, deterrence is scant, judicial responses vary, and victims of deepfake-related cybercrime receive very little protection. Deepfakes enable wrongful representations of a kind that raises new legal issues the current legislation cannot tackle. This growing concern forces regulators to reconcile the principle of freedom of speech with the protection of rights such as privacy, dignity, and social peace. Where there is no clear law, the judiciary and police have no choice but to stretch old legal rules to handle new digital problems, a method that wastes time and resources in the long run. If legal reform is delayed, deepfake technology will be misused more and more, leading to public mistrust of digital media and the unauthorised use of people’s identities for manipulation. Hence, a responsive and advanced legal system is needed to protect people’s rights and assure trust in the digital world.

 REFERENCES

[1] Abhay Jain, Deepfakes and Misinformation: Legal Remedies and Legislative Gaps, 3 Indian J. of L. 23 (2025)

[2] Adyasha Behera & Bhanu Pratap Singh, Deceptive Realities: India’s Legal and Ethical Framework Against Digital Forgeries and Deepfake Crimes, 7 Int’l J. of L. Mgmt. & Humanities 2211 (2024) https://ijlmh.com/wp-content/uploads/Deceptive-Realities-Indias-Legal-and-Ethical-Framework-Against-Digital-Forgeries-and-Deepfake-Crimes.pdf

[3] Preksha Singh, Deepfakes, Identity Theft, and the Dark Web: Legal Gaps in AI-Generated Fraud, an Indian Perspective, 5 Int’l J. Civ. L. & Legal Rsch. 103 (2025) https://www.civillawjournal.com/article/148/5-2-17-526.pdf

[4] Asim Mustafa Khan, Regulating Deepfakes in India: A Legal and Ethical Analysis of Misinformation in the Age of AI, Indian J. of L. & Legal Rsch. (IJLLR) (2025) https://www.ijllr.com/post/regulating-deepfakes-in-india-a-legal-and-ethical-analysis-of-misinformation-in-the-age-of-ain

[5] Aditya Pratap Singh, Legal Implications of Deepfake Technology in Criminal Law, 8 Int’l J. of L. Mgmt. & Humanities 1645 (2025) https://ijlmh.com/paper/legal-implications-of-deepfake-technology-in-criminal-law/
