Published On: 15th January 2026
Authored By: Somya Mittal
University Institute of Legal Studies, Panjab University
Abstract
Artificial intelligence (AI) is rapidly changing the way evidence is created, analysed, and used in criminal courts. Today, tools like AI-enhanced CCTV footage, automated voice matching, and image recognition systems are becoming common in investigations, something that was almost unimaginable a decade ago. Yet, the law has not been able to keep up with these technological shifts. This article explores the major challenges that AI-generated evidence brings to criminal trials, particularly concerns about authenticity, reliability, bias, and due process. It also looks at how Indian law, especially after the enactment of the Bharatiya Sakshya Adhiniyam, 2023, deals with these issues and suggests ways in which the legal framework can be strengthened moving forward.
Introduction
The rapid rise of AI technologies has begun to reshape criminal justice systems around the world. Investigating agencies now routinely rely on tools like predictive algorithms, facial recognition software, and AI-generated crime scene reconstructions. While these innovations can make investigations quicker and more efficient, they also raise important concerns about fairness and accuracy. Can we really trust evidence produced or processed by AI? How do we ensure such evidence has not been manipulated or shaped by hidden biases? And, perhaps most importantly, how should courts assess this kind of evidence under current law?
India’s recent transition from the Indian Evidence Act, 1872,[1] to the Bharatiya Sakshya Adhiniyam, 2023,[2] has pushed digital evidence to the forefront. However, the new law still does not fully address the special problems that come with AI-generated material. This article argues that although AI has the potential to strengthen the criminal justice system, courts must apply stricter standards of authenticity and due process to ensure that this evidence is used responsibly and does not lead to injustice.
AI-Generated Evidence: Understanding the Concept
AI-generated evidence refers to any digital material that is created, processed, or substantially modified by an artificial intelligence system. It can take many forms, such as:
• AI-enhanced CCTV footage
• Facial or voice recognition reports produced through algorithms
• AI-generated 3D reconstructions of crime scenes
• Predictive or forensic algorithms used during investigations
• Deepfake detection tools and other synthetic media analysis
What makes this type of evidence different from traditional forms is the way AI systems work. Many of them function like “black boxes,” where the internal steps, calculations, or logic behind the output are not easily visible or understandable. Because of this lack of transparency, AI-generated evidence raises a number of special concerns when it comes to reliability and fairness in criminal trials.[3]
Key Legal Challenges
1. Authenticity and Reliability
One of the most basic requirements for any piece of evidence is that it must be genuine and trustworthy. Under Section 63 of the Bharatiya Sakshya Adhiniyam, 2023, digital records can be admitted in court only if their authenticity is supported by a certificate signed by the person in charge of the device or the relevant activity and by an expert.[4] When it comes to AI-generated material, however, proving authenticity becomes far more complicated. This is because:
• AI tools can alter images, videos, or audio in ways that humans cannot easily detect.
• The data and methods used to train these algorithms are usually not made public.
• The output may change depending on the design of the software or even a simple update.
Indian courts have already emphasised the importance of reliable digital evidence, as seen in Anvar P.V. v. P.K. Basheer.[5] But AI-based tools bring additional uncertainties that make the verification process even more challenging.
2. Algorithmic Bias
Several studies have shown that AI systems can exhibit racial, gender, or social biases based on the data used to train them.[6] When such biased tools are used in criminal cases, they can cause serious problems, such as:
• Incorrect facial recognition matches
• Disproportionate suspicion against certain communities
• Strengthening of existing biases within policing
The criminal justice system already faces challenges related to discrimination and unequal treatment. If AI tools are not carefully examined, they can unintentionally deepen these problems. International studies, such as NIST's Face Recognition Vendor Test report on demographic effects (2019), document similar risks.[7]
3. Violation of Due Process and Fair Trial Rights
The Constitution of India protects every person’s right to a fair trial under Article 21.[8] But the use of AI-generated evidence can pose risks to this right in two major ways:
Opaque Algorithms: If the defence is not allowed to see or question how an algorithm works, it becomes impossible for them to properly challenge the evidence, which goes against the idea of procedural fairness (Selvi v. State of Karnataka, 2010).[9]
Self-Incrimination: When AI tools recreate or interpret a suspect’s actions without their consent, it may interfere with the protection offered by Article 20(3) against self-incrimination.[10]
Because of these concerns, it is essential for courts to ensure that AI-generated evidence does not violate or weaken the fundamental rights of the accused.
4. Chain of Custody and Tampering Risks
Digital evidence has always been at risk of being altered, but AI technology increases this risk even more. With the rise of deepfakes, it is now possible to create extremely realistic yet completely false images or videos.[11] This makes it much harder to maintain a proper and verifiable chain of custody, especially when AI tools are used to edit, enhance, or process the original files. Ensuring that the evidence presented in court is genuine becomes a much bigger challenge in such situations (State v. Navjot Sandhu, 2005).[12]
5. Expert Testimony and Judicial Understanding
Judges and lawyers often don’t have deep technical knowledge about how AI systems actually work. Because of this, courts in many countries depend heavily on expert witnesses to explain AI-generated evidence.[13] But this approach has its own problems:
• Different experts may offer conflicting opinions.
• Companies that create these AI tools might refuse to reveal their proprietary algorithms.
• Judges might give too much weight to AI outputs simply because they appear scientific or advanced, a tendency sometimes called the “technological halo effect.”
All these factors increase the chances of errors in judgment and can even lead to wrongful convictions if the evidence is not properly understood or questioned.
Indian Legal Framework and Its Limitations
The Bharatiya Sakshya Adhiniyam, 2023, modernizes the law of evidence by recognizing electronic and digital records in a broader manner.[14] However, the law still largely assumes that digital evidence is created or verified by humans. AI-generated evidence introduces questions that the statute doesn’t clearly answer, such as:
• Who should be considered the “author” of AI-generated content?
• Can a machine itself be treated as a “witness”?
• How much information about the software or algorithm needs to be disclosed by developers?
While courts can continue to rely on traditional tests of admissibility and relevance, the unique complexities of AI evidence make it clear that a more tailored framework is needed.[15]
Comparative Perspectives
United States
In the US, courts follow the Daubert standard, under which scientific evidence must rest on methods that are testable, peer-reviewed, supported by known error rates, and generally accepted in the relevant field.[16] Some courts have already voiced concern that certain AI-based forensic tools struggle to meet these requirements; in State v. Loomis (2016), for instance, the Wisconsin Supreme Court cautioned against uncritical reliance on a proprietary risk-assessment algorithm whose inner workings could not be disclosed.[17]
European Union
The EU AI Act treats AI tools used in law enforcement as “high-risk,” meaning they must meet strict standards for transparency, auditability, and human oversight.[18]
United Kingdom
In the UK, digital forensic guidelines require investigators to carefully document every step of their analysis, including the tools or software they use, the methodology applied, and the limitations of those tools.[19]
India can take lessons from these international approaches to develop a stronger and more reliable framework for handling AI-generated evidence in its courts.
The Way Forward
1. Statutory Reforms
Parliament could consider amending the Bharatiya Sakshya Adhiniyam to address AI-generated evidence more clearly.[20] Possible reforms might include:
• Clear definitions of what counts as AI-generated evidence
• Standards for verifying and authenticating such evidence
• Rules requiring transparency about how algorithms work
• Mandatory certification by qualified experts before AI-generated evidence can be presented in court
These measures would help ensure that AI evidence is reliable and can be fairly assessed by the courts.
2. Judicial Guidelines
Courts could create their own set of guidelines, similar to the Daubert standard used in the US, to make sure that AI-generated evidence is scientifically reliable and trustworthy before it is admitted in trials.[21]
3. Independent Audits
AI tools used by law enforcement should be subject to independent audits to verify that they are accurate, free from bias, and secure.[22]
4. Defence Access
The defence should have the right to review the algorithm, the data it was trained on, and its error rates whenever an AI tool is used as evidence against an accused person.[23]
5. Training for Legal Professionals
Judges, prosecutors, and defence lawyers should be provided with training to better understand and handle digital and AI-generated evidence.[24]
Conclusion
AI-generated evidence offers both opportunities and challenges for the criminal justice system. On one hand, it can make investigations more accurate and efficient; on the other, it brings risks such as bias, manipulation, and potential violations of individual rights.[25] Even with recent reforms, India’s legal framework does not fully address these challenges.[26] This means that courts and lawmakers need to take active steps to create clearer rules for admissibility, transparency, and fairness.[27] Only by doing so can AI be used responsibly in criminal trials, without undermining constitutional rights or the integrity of the justice system.
References
Cases
[5] Anvar P.V. v. P.K. Basheer, (2014) 10 S.C.C. 473 (India).
[9] Selvi v. State of Karnataka, (2010) 7 S.C.C. 263 (India).
[12] State v. Navjot Sandhu, (2005) 11 S.C.C. 600 (India).
[16] Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579 (1993).
[17] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
Statutes / Constitutional Provisions
[2] Bharatiya Sakshya Adhiniyam, 2023, §§ 63–65 (India).
[4] Bharatiya Sakshya Adhiniyam, 2023, § 63 (India).
[14] Bharatiya Sakshya Adhiniyam, 2023, §§ 63–65 (India).
[20] Bharatiya Sakshya Adhiniyam, 2023, §§ 63–65 (India).
[26] Bharatiya Sakshya Adhiniyam, 2023, §§ 63–65 (India).
[1] Indian Evidence Act, No. 1 of 1872, §§ 61–65 (India).
[8] CONST. art. 21 (India).
[10] CONST. art. 20(3) (India).
[18] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final, arts. 6–9.
Secondary Sources / Reports / Guidelines
[3] Kate Crawford & Ryan Calo, There Is a Blind Spot in AI Research, 538 Nature 311, 311–13 (2016); Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 676–78 (2016).
[6] Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 676–78 (2016).
[22] Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 711–12 (2016).
[27] Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 711–12 (2016).
[11] Robert Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753, 1764–65 (2019).
[7] Patrick Grother et al., Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, NIST Interagency Report 8280, 2019, at 1, https://doi.org/10.6028/NIST.IR.8280.
[25] Patrick Grother et al., Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, NIST Interagency Report 8280, 2019, at 1, https://doi.org/10.6028/NIST.IR.8280.
[15] R. Subramanian, Artificial Intelligence and Evidence Law in India, 11 J. Indian L. & Soc’y 45, 52–54 (2021).
[23] R. Subramanian, Artificial Intelligence and Evidence Law in India, 11 J. Indian L. & Soc’y 45, 58 (2021).
[13] Richard Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts 131–34 (2015).
[24] Richard Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts 131–34 (2015).
[21] Daubert v. Merrell Dow Pharm., 509 U.S. 579, 593–94 (1993).
[19] Nat’l Policing Digital Forensics Guidelines, U.K. Home Office (2020), https://www.gov.uk/government/publications/digital-forensics-guidelines.