Published on 12th June 2025
Authored By: Iffat Zehra
St Joseph's College of Law
Abstract
The integration of artificial intelligence (AI) into the Indian justice system has ignited new conversations about the viability of due process. The use of AI tools across the governance system, including policing and judicial administration, raises challenges that threaten the fundamental right to a fair trial. This article critically examines the relationship between algorithmic decision-making and constitutional entitlements under Article 21 of the Indian Constitution. Through an analysis of current practice, standards of review, and the authority of courts, it identifies shortcomings relating to accountability, transparency, and procedural fairness. The article then draws on comparative jurisprudence and international standards to suggest a regulatory framework for protecting due process in an AI context.
Keywords: Artificial Intelligence, Judicial Accountability, Fair Trial, Due Process, Predictive Policing, Indian Constitution, Article 21, Legal Safeguards, Transparency, Algorithmic Bias
Introduction
The rapid implementation of artificial intelligence (AI) in public administration, worldwide and in India, has ushered in new paradigms in the administration of justice. AI systems are now used in many areas of criminal adjudication and law enforcement, such as predictive policing, bail determination, risk assessment, and even sentencing recommendations, and are being investigated for their capacity to automate case management, aid legal research, and support judicial decision-making. Though these technologies promise efficiency and consistency, they also pose significant risks of undermining fundamental legal principles, particularly the due process rights and the right to a fair trial guaranteed under Article 21 of the Constitution of India. This article therefore examines whether algorithmic decision-making compromises due process, whether the use of AI in judicial and law enforcement processes is consistent with the right to a fair trial and, if so, how legal systems can adapt to ensure that technological innovation remains compatible with basic legal protections.
Body
The concept of due process
Due process, a foundational principle of constitutional democracies, is the legal requirement that the state must respect all legal rights owed to a person. It comprises the right to fair, open, and impartial legal proceedings conducted in an orderly way. While due process originates from the Magna Carta, it is also found in the constitutions and laws of many jurisdictions, national and international. Although not expressly mentioned in the Indian Constitution, it has been interpreted by the courts as an inherent part of Article 21 (the right to life and personal liberty). This understanding was laid down by the Supreme Court in ‘Maneka Gandhi v Union of India’, which ruled that any action affecting life and liberty must follow a procedure that is just, fair, and reasonable. [1] Due process has since been expanded to include the right to a fair trial, the right to legal counsel, the presumption of innocence until proven guilty, and more. The United States secures due process through the Fifth and Fourteenth Amendments. The overarching question for the courts is how these traditional notions of fairness should be interpreted when decisions are made not by humans but by algorithms.
Algorithmic decision-making in justice systems
Algorithmic decision-making occurs when computer programs use data to produce decisions or recommendations. These programs range from straightforward rule-based algorithms to sophisticated machine learning models that identify patterns and relationships in extensive data. In the criminal justice context, algorithmic decision-making is commonly used for predictive policing (e.g., identifying crime ‘hot spots’), risk assessment (e.g., estimating the risk of re-offending), sentencing recommendations, facial recognition in investigations, and prioritisation of cases in courts. In India, AI systems are increasingly being deployed in facial recognition for policing, predictive policing, and court automation. For example, the Delhi Police have used AI-based facial recognition technology (FRT) for surveillance and investigations, and the Supreme Court’s AI Committee has piloted tools such as SUPACE (Supreme Court Portal for Assistance in Court Efficiency), which assists judges by summarising case files, recommending relevant statutes, and surfacing legal precedents. When utilised properly, AI technology can increase efficiency. However, these tools usually operate as black boxes: they lack transparency, public consultation, and a legislative or statutory basis, which makes it difficult to understand, contest, or audit the grounds of their decisions and undermines procedural fairness when decisions affecting individual rights are rendered with their assistance.
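The opacity problem described above is easiest to see in miniature. The following toy Python sketch (every feature name, weight, and threshold here is hypothetical, invented for illustration and not drawn from COMPAS, SUPACE, or any real tool) shows how even a simple scorer bakes policy judgements, such as treating youth as a risk factor, into logic a defendant may never see:

```python
# Illustrative only: a toy risk-assessment scorer. All features, weights,
# and thresholds are hypothetical, not taken from any real system.

def risk_score(prior_arrests: int, age: int, employed: bool) -> str:
    """Return a coarse re-offending risk band from three hypothetical inputs."""
    score = 0
    score += min(prior_arrests, 5) * 2      # prior record dominates the score
    score += 2 if age < 25 else 0           # youth treated as a risk factor
    score += 0 if employed else 1           # unemployment adds to the score
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(risk_score(prior_arrests=4, age=22, employed=False))  # → high
print(risk_score(prior_arrests=0, age=40, employed=True))   # → low
```

A defendant told only that they were rated "high" cannot contest the cap on prior arrests, the age cut-off, or the thresholds unless that internal logic is disclosed, which is precisely the due process difficulty discussed below.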
Due process concerns in the age of AI
(i) Lack of Transparency: The black-box character of judicial and law enforcement AI systems discloses very little information about a system’s algorithm, the data used, or its decision-making logic. This opacity severely limits a defendant’s ability to understand or challenge an outcome based on such tools. The Wisconsin Supreme Court, in State v. Loomis, upheld the use of the COMPAS risk assessment tool in sentencing, while simultaneously acknowledging concerns about its opacity and a defendant’s inability to challenge, or even learn, the inputs and logic behind the decision. [2]
(ii) Algorithmic Bias: AI systems learn from historical data that carries the imprint of biases, whether racial, caste-based, or socio-economic. When such biased data is fed into an AI tool such as predictive policing software, some communities may be over-surveilled while others are under-policed, thereby violating the equality doctrine. In the Indian context, if the training data carries systematic bias against a community, algorithmic predictions influenced by caste or religion are likely to produce discrimination.
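The feedback loop behind predictive-policing bias can be sketched in a few lines. In this hypothetical Python illustration (the neighbourhoods and all figures are invented), two areas have identical underlying offence rates, but one has historically received heavier patrolling, so more of its offences are recorded:

```python
# Hypothetical illustration of feedback bias in predictive policing.
# Neighbourhoods and numbers are invented for demonstration only.

true_offence_rate = {"A": 10, "B": 10}    # identical underlying rates
patrol_intensity  = {"A": 3.0, "B": 1.0}  # historical over-policing of A

# Recorded crime reflects patrol presence, not just actual offending.
recorded = {n: true_offence_rate[n] * patrol_intensity[n] for n in ("A", "B")}

# A naive "hot spot" model ranks areas by recorded crime, so it sends even
# more patrols to A, which will in turn record still more crime there.
hot_spot = max(recorded, key=recorded.get)
print(recorded)   # {'A': 30.0, 'B': 10.0}
print(hot_spot)   # 'A', despite identical true offence rates
```

The model’s prediction simply launders the historical patrolling pattern back out as an apparently objective forecast, which is why representative training data and bias audits matter.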
(iii) Accountability Gaps: In traditional judicial proceedings, a human decision-maker can be held accountable. When AI systems make or influence legal decisions, however, identifying who is responsible becomes complex. This diffusion of accountability undermines judicial review and weakens the individual’s right to seek redress, an essential element of due process.
Global Legal Developments and Case Law
- In ‘K.S. Puttaswamy v Union of India’, the Supreme Court recognised the right to privacy as part of life and liberty under Article 21. [3] The ruling emphasised data protection and consent, which are especially important for AI systems that rely heavily on personal data.
- In ‘Justice K.S. Puttaswamy (Retd.) v Union of India’, the Court upheld the Aadhaar scheme but reaffirmed the requirements of proportionality and procedural protection when state action impacts fundamental rights. [4]
- Earlier this year, the Delhi High Court in ‘S. Somanathan v Union of India’ raised concerns about the arbitrary use of FRT by law enforcement agencies, given the absence of legislative guidelines for such technologies. [5]
- The NITI Aayog report on AI focused on ethical and accountability-related issues rather than a legal framework. Indian courts have not yet seen substantial litigation on AI in legal decision-making, but as AI adoption advances, such challenges are likely to follow.
- The U.S. Wisconsin Supreme Court upheld the application of a risk prediction algorithm (COMPAS) for the purpose of sentencing in ‘State v. Loomis’ but admonished against blind reliance on proprietary algorithms. [2]
- The European Union’s General Data Protection Regulation (GDPR) contains provisions on automated decision-making that are often read as conferring a right to explanation. The EU AI Act, which regulates high-risk AI systems, counts among them those applied in legal and law enforcement contexts.
- The UK Court of Appeal ruled in ‘R (Bridges) v Chief Constable of South Wales Police’ that the police use of live facial recognition technology violated privacy rights and lacked sufficient legal clarity, underscoring the need for statutory safeguards. [6]
Safeguarding due process in India
- Legislative Framework: India should create a comprehensive legislative framework to govern the use of AI in judicial and law enforcement domains. The legislation should require that AI tools be certified before use, that periodic review and impact assessment processes be in place, and that penalties apply for misuse or discriminatory outcomes.
- Transparency Requirements: Algorithms used in justice systems must be explainable, meaning that both the input data and the logic by which a decision is reached can be assessed and scrutinised. Legislation should oblige governing authorities to disclose algorithmic logic, data sources, and performance metrics. The statute could also mandate open-sourcing of algorithms and independent audits, measures that would not only preserve fairness but also restore public trust.
- Human Intervention: AI tools should augment human judgement rather than replace it. Judges must retain ultimate authority over legal decisions and must justify any reliance on algorithmic advice. Judicial officers should be trained in the ethical and effective use of AI.
- Right to Challenge: Individuals must have the right to challenge and appeal decisions made or influenced by AI. This includes access to information about how an algorithm was designed, the data on which it was trained, and its performance metrics, so that an effective legal challenge can be mounted.
- Bias Reduction: It is essential to ensure that AI training data is representative and free of historical or systemic biases. Developers must test regularly and openly for disparate impacts and implement corrective measures where needed to avoid biased and discriminatory outcomes, a point of particular importance given the extent of diversity and inequality in India.
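The transparency and right-to-challenge safeguards above can be sketched as a decision record that discloses its own logic. In this hypothetical Python illustration (the features, weights, and threshold are invented), the system returns not just an outcome but the per-factor breakdown a party would need in order to contest it:

```python
# Hypothetical sketch of an "explainable" decision record of the kind the
# transparency and right-to-challenge safeguards would require. Feature
# names, weights, and the threshold are invented for illustration.

def assess(features: dict) -> dict:
    weights = {"prior_arrests": 2, "missed_hearings": 3}
    contributions = {k: features.get(k, 0) * w for k, w in weights.items()}
    score = sum(contributions.values())
    return {
        "decision": "detain" if score >= 6 else "release",
        "score": score,
        "contributions": contributions,  # per-factor breakdown a party can contest
        "weights_disclosed": weights,    # the logic itself, open to audit
    }

record = assess({"prior_arrests": 1, "missed_hearings": 2})
print(record["decision"])  # → detain (score 2 + 6 = 8)
```

Because each factor’s contribution and weight is disclosed, a litigant can identify exactly which input drove the outcome, which is the minimum an effective appeal requires.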
Conclusion
As India adapts technological innovation into its justice delivery system, safeguarding due process becomes both more complicated and more essential. AI offers significant benefits in efficiency and decision-making, but if left unregulated it also poses grave dangers to fundamental legal principles such as equity, accountability, and transparency. The enhancement of justice through AI must not come at the cost of the rights it is intended to protect. AI must therefore be embedded cautiously within a forward-looking legal and ethical framework that prioritises human rights and the rule of law, grounded in the tenets of transparency, accountability, and fairness. Courts, legislatures, and civil society will need to work together to ensure that AI deployment strengthens rather than erodes the foundational values of the legal system.
References
- Maneka Gandhi v Union of India AIR 1978 SC 597.
- State v Loomis 881 NW 2d 749 (Wis 2016).
- K.S. Puttaswamy v Union of India (2017) 10 SCC 1.
- Justice K.S. Puttaswamy (Retd.) v Union of India (2018) 1 SCC 809.
- S. Somanathan v Union of India WP(C) No 679/2020 (Del HC).
- R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058.
- Julia Angwin and others, ‘Machine Bias’ (ProPublica, 23 May 2016) (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) (visited on 16 April 2025).
- Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76.
- NITI Aayog, ‘Responsible AI for All’ (Part 1 – Principles for Responsible AI, 2021).
- Indian Council for Research on International Economic Relations (ICRIER), ‘AI and Governance in India’ (2022) (https://icrier.org/pdf/AI_and_Governance_Report.pdf) (visited on 16 April 2025).