Constitutionalizing the Algorithm through Reimagining Legal Accountability in India’s AI Governance

Published on 12th June 2025

Authored By: Devesh Mandhata
Indian Institute of Management Rohtak [IIM-R]

Abstract

Can a constitutional democracy protect civil liberties when decisions are made by opaque algorithms rather than human officials? As India expands the use of artificial intelligence in public systems, from welfare delivery to surveillance, key legal questions emerge. Existing laws like the Information Technology Act 2000 and the Digital Personal Data Protection Act 2023 do not address fairness, explainability, or redress. Courts remain silent on algorithmic harms, and public institutions lack the capacity to scrutinize these technologies. This paper examines whether India’s current legal framework can handle the risks posed by AI in governance. It draws insights from global models such as the European Union AI Act, the United States AI Bill of Rights, and China’s algorithmic auditing rules. These are used to develop a context-sensitive solution for India. The paper proposes a unified legal approach based on constitutional values of equality, dignity, and due process. It argues for a rights-based regulatory law, independent oversight bodies, and public participation. The goal is to ensure that artificial intelligence does not bypass democratic accountability. Instead, it must function within a legal order that protects individual rights and upholds the rule of law.

Introduction

The spread of algorithmic decision-making in India (encompassing predictive policing, welfare allocation, biometric authentication, and credit scoring) has fundamentally altered the State’s engagement with its citizens. These systems, embedded in applications from Delhi’s facial recognition surveillance to Aadhaar-linked welfare exclusions, wield significant power to classify, prioritize, and exclude, often without transparency or accountability.[1] Such opacity supplants procedural fairness, eroding the constitutional commitment to reasoned governance.[2]

This technological shift raises profound concerns. Algorithmic “black boxes” obscure decision-making processes, leaving individuals without explanations, recourse, or avenues for review.[3] The inherent complexity of machine learning resists scrutiny, and public deployments often evade deliberative oversight.[4] Consequently, the constitutional guarantee of transparent, justifiable decision-making is at risk.[5]

Accountability in this context transcends technical audits. It requires auditable systems, justiciable outcomes, and adherence to constitutional norms.[6] Yet, India’s legal framework, primarily the Information Technology Act 2000 and the evolving data protection regime, lacks the coherence to address the multifaceted harms of opaque algorithmic governance.[7] These instruments are fragmented and theoretically inadequate for regulating algorithmic authority.[8]

This article employs a doctrinal and comparative approach, analyzing Indian constitutional jurisprudence alongside global regulatory frameworks, to propose a model rooted in transparency, liability, and procedural fairness.[8] The question is not whether regulation is needed, but what form of regulation aligns with India’s constitutional morality.[9]

Mapping the Legal Void

The rapid integration of artificial intelligence (AI) into India’s public and private sectors has outstripped the development of a cohesive legal framework, resulting in a patchwork of inadequate regulations, cautious judicial engagement, and unresolved doctrinal tensions.[10] This chapter examines these deficiencies and their implications for constitutional governance.

A. Statutory Frameworks (Inadequacies and Oversights)

India’s primary technology statute, the Information Technology Act 2000, was enacted to address cybercrimes and electronic commerce, not AI governance.[11] It lacks provisions mandating transparency, fairness, or explainability in algorithmic decision-making, leaving critical ethical questions unaddressed.[12] Similarly, the Digital Personal Data Protection Act 2023, while advancing data protection, offers scant guidance on algorithmic profiling.[13] It imposes no explicit obligations on data fiduciaries to mitigate discriminatory outcomes from automated processing, depriving individuals of robust recourse against algorithmic harms.[14]

B. Judicial Engagement (Caution and Constraints)

Indian courts have yet to grapple comprehensively with AI’s challenges. In KS Puttaswamy v Union of India, the Supreme Court recognized privacy as a fundamental right under Article 21 but did not explore algorithmic decision-making’s impact on autonomy or dignity.[15] Likewise, in the Pegasus spyware litigation, the Court acknowledged surveillance technologies’ potential to violate rights but stopped short of establishing standards for their oversight.[16] This judicial reticence highlights the need for proactive legal frameworks to address AI’s complexities.[17]

C. Constitutional Tensions (Equality, Fairness, and Autonomy)

AI deployment implicates core constitutional guarantees. Article 14’s promise of equality is undermined by algorithmic biases arising from skewed training data or opaque processes, which can produce discriminatory outcomes.[18] Article 21’s assurance of procedural fairness is jeopardized when automated decisions lack transparency or appeal mechanisms, denying individuals the ability to contest adverse outcomes.[19] The Puttaswamy judgment’s emphasis on privacy, dignity, and autonomy is further challenged by AI systems that process personal data without informed consent or transparency, necessitating robust regulatory safeguards.[20]

D. Liability and Legal Personhood: Uncharted Territory

Attributing liability for AI-driven harms poses a significant challenge. Traditional legal frameworks assign responsibility to human or juridical persons, but autonomous AI systems defy these categories.[21] Proposals to grant AI legal personhood aim to facilitate accountability but risk diluting human responsibility and complicating enforcement.[22] Such approaches could undermine doctrines like the corporate veil, enabling entities to evade liability by attributing actions to AI agents.[22] These unresolved questions underscore the urgency of a tailored liability framework for AI governance.[23]

Comparative Models of Algorithmic Governance

Chapter II exposed the fragmented statutory frameworks, cautious judicial engagement, and doctrinal tensions that impede India’s regulation of algorithmic governance, leaving constitutional guarantees vulnerable.[24] To address these gaps, this chapter examines comparative legal architectures in the European Union, the United States, and China, evaluating their normative insights and translatability into India’s context, constrained by institutional peculiarities, constitutional imperatives, and techno-legal capacity deficits.[25]

A. The EU’s Risk-Based Regime

The European Union’s AI Act, adopted in 2024 after a lengthy legislative process, offers a comprehensive framework by classifying AI systems into risk categories (unacceptable, high, limited, and minimal) with tailored obligations.[26] High-risk systems, such as those in law enforcement or biometric identification, require transparency documentation, human oversight, and ex ante conformity assessments.[27] This model embeds European values of human dignity, non-discrimination, and procedural fairness, aligning with constitutional morality but demanding robust institutional capacity India currently lacks.[28]

B. The U.S. Model

The U.S. Blueprint for an AI Bill of Rights, though non-binding, articulates five principles: safe systems, anti-discrimination protections, data privacy, notice and explanation, and human alternatives.[28] Sector-specific guidance, such as the Equal Employment Opportunity Commission’s directives on algorithmic bias in hiring, bolsters procedural redress.[29] This framework has spurred civil rights litigation against discriminatory AI outcomes in credit scoring and predictive policing, offering a decentralized, rights-oriented approach adaptable to India’s litigious culture but limited by its lack of enforceable mandates.[30]

C. China’s Algorithmic Control Model

China’s 2022 Provisions on Algorithmic Recommendation mandate platforms to register algorithms with the Cyberspace Administration of China, disclose recommendation logic, and provide opt-out controls.[31] The state’s authority to conduct algorithmic audits reflects a centralized control model incompatible with India’s democratic ethos.[32] Yet, its emphasis on ex ante registration raises pertinent questions for regulating India’s public-sector AI, such as welfare or policing systems.[33]

D. Indian Constraints (Structural Asymmetries and Democratic Deficits)

India’s unique challenges hinder wholesale adoption of these models. First, the State-as-developer dilemma creates conflicts when the State deploys AI, as seen in Telangana Police’s opaque use of crime prediction tools or Aadhaar-linked welfare exclusions without appeal mechanisms.[34] Second, institutional independence is precarious; a regulatory body akin to the EU’s Artificial Intelligence Board would require constitutional insulation, a challenge given the executive-dominated board under the Digital Personal Data Protection Act 2023.[35] Finally, judicial algorithmic literacy remains limited, as evidenced in Anuradha Bhasin v Union of India and PUCL v Union of India, where courts acknowledged digital technology’s implications but refrained from robust scrutiny standards, perpetuating a jurisprudential vacuum.[36] These constraints demand a tailored regulatory approach rooted in India’s constitutional framework.[37]

Designing a Unified Indian Framework for Algorithmic Accountability

Chapter III illuminated the normative strengths and contextual limits of global algorithmic governance models, underscoring India’s unique constraints. Building on these insights, this chapter proposes a unified, constitutionally anchored framework – the Algorithmic Accountability and Rights Protection Act (AARPA) – to reconcile innovation with human rights, administrative efficiency with procedural justice, and market-led development with public law accountability.[38]

A. Foundational Norms (Constitutionalising Algorithmic Design)

Explainability must be enshrined as a fundamental right under Article 21’s guarantee of procedural fairness.[39] Public-facing algorithmic systems (in welfare, policing, or digital governance) must provide reasoned decisions, rejecting opacity as a defense for denying rationales behind classifications or exclusions.[40] A structured liability regime, assigning differentiated duties to developers, data controllers, and deployers, should adopt a “duty of care” standard scaled by AI system risk and rights impact.[41] Liability for harms from biased data or negligent deployment must extend to civil, administrative, and, where applicable, criminal responsibility.[42]

A codified non-discrimination clause, rooted in Article 14 and the doctrine of manifest arbitrariness (Shayara Bano), must prohibit algorithmic decisions with disparate impacts on protected classes – gender, caste, religion, or socio-economic status.[43] Unlike the Digital Personal Data Protection Act 2023, which sidesteps profiling harms, this regime must prioritize substantive equality in data-driven governance.[44]

B. Institutional Infrastructure

A two-tier institutional structure is essential:

  1. National AI Ombudsman: A constitutionally insulated office, independent of executive control, empowered to adjudicate grievances against algorithmic systems used by public or large private actors.[45] Modelled on the Banking Ombudsman, it must have binding powers and statutory timelines for redress.[46]
  2. Algorithmic Audit Boards: Sector-specific, quasi-judicial bodies comprising technologists, legal experts, ethicists, and civil society members, tasked with ex ante certification and periodic review of high-risk AI systems.[47] These Boards would publish transparency logs, audit datasets, and assess systemic bias, adapting EU conformity assessments to Indian administrative law.[48]

C. Procedural Safeguards and Public Rights

Procedural entitlements must include:

  1. Notice-before-deployment: Public-sector AI systems require prior disclosure to affected communities, detailing function, scope, and recourse rights, akin to environmental impact assessments.[49] This ensures deliberative legitimacy.
  2. Right to Object to Automated Decisions: Drawing from GDPR Article 22, individuals must have the right to opt out of solely automated decisions with significant effects, supported by human-in-the-loop safeguards and post-decision review.[50]

D. Legislative Blueprint

The proposed AARPA must harmonise with, but not be subordinated to, the Information Technology Act 2000 and DPDP Act 2023.[51] It should include:

  1. Risk-based classification of AI systems.
  2. Detailed duties of care and accountability for developers and deployers.
  3. Mandatory registration and algorithmic impact assessments for high-risk or public-facing systems.
  4. Cross-references to IT Rules to avoid overlaps, with override clauses for gaps.[52]

Public consultation, absent in the DPDP Act’s enactment, must be statutorily mandated to reflect India’s pluralistic democracy.[53]

E. Indigenous Constitutional Ethos (Reclaiming Normative Agency)

Mimicking foreign models risks misalignment with India’s constitutional culture.[54] Indigenous doctrines such as substantive due process, proportionality (Modern Dental College), and manifest arbitrariness must anchor AI governance.[55] Article 21’s right to life and liberty, as expanded in Puttaswamy, should encompass algorithmic fairness, redress, and dignity in the digital age, ensuring a framework that is both globally informed and locally resonant.[56]

Conclusion

Chapter IV proposed the Algorithmic Accountability and Rights Protection Act, a unified framework rooted in constitutional norms, institutional oversight, and procedural safeguards to govern India’s AI landscape.[57] This conclusion synthesizes the urgency of that vision, arguing that without a coherent legal response, algorithmic governance threatens to erode the constitutional liberties of equality, dignity, and fairness.[57]

Opaque welfare algorithms and unchecked facial recognition systems exemplify the risks of unaccountable AI, sidelining Article 14’s non-arbitrariness and Article 21’s procedural guarantees.[58] Accountability cannot be a reactive fix; it demands ex ante design – transparent mandates, robust liability, and enforceable rights embedded in AI’s architecture.[59] The Information Technology Act 2000 and Digital Personal Data Protection Act 2023, while relevant, lack the public law grounding to address systemic biases or ensure democratic legitimacy.[60]

India’s path forward requires a threefold approach: aligning AI regulation with constitutional jurisprudence, enacting a dedicated statutory framework, and establishing robust institutions like a National AI Ombudsman and Algorithmic Audit Boards.[61] Governance must transcend technical competence, anchoring itself in moral and democratic principles.[61] Only by constitutionalizing the algorithm (making all power, human or machine-driven, answerable to the people) can India uphold its founding promise of accountable governance.[62]

 

References

[1] See, eg, Vidushi Marda, ‘Data Flow and Friction: Aadhaar and the Architecture of Exclusion’ (2020) 12 Indian J L & Tech 1, 10-12.

[2] Constitution of India 1950, art 14 (guaranteeing equality before the law and equal protection).

[3] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015) 3-6.

[4] Anupam Chander, ‘The Racist Algorithm?’ (2017) 115 Mich L Rev 1023, 1030-32.

[5] KS Puttaswamy v Union of India [2017] 10 SCC 1, paras 310-12 (Chandrachud J).

[6] Danielle Keats Citron, ‘Technological Due Process’ (2008) 85 Wash U L Rev 1249, 1254-56.

[7] Information Technology Act 2000, s 43A; Digital Personal Data Protection Act 2023, s 7.

[8] Amber Sinha, ‘The Algorithmic Society: The Indian Context’ (2021) 15 Jindal Global L Rev 45, 50-53; see, eg, General Data Protection Regulation (EU) 2016/679 [2016] OJ L119/1, art 22 (regulating automated decision-making).

[9] Navtej Singh Johar v Union of India [2018] 10 SCC 1, para 253 (Misra CJ) (emphasizing constitutional morality).

[10] Amber Sinha, ‘The Algorithmic Society: The Indian Context’ (2021) 15 Jindal Global L Rev 45, 47-49.

[11] Information Technology Act 2000, ss 43-44.

[12] Vidushi Marda, ‘Data Flow and Friction: Aadhaar and the Architecture of Exclusion’ (2020) 12 Indian J L & Tech 1, 15-17.

[13] Digital Personal Data Protection Act 2023, s 7 (data processing obligations).

[14] Chinmayi Arun, ‘AI and the Rule of Law: The Missing Indian Framework’ (2022) 10 Indian J Const L 78, 82-84.

[15] KS Puttaswamy v Union of India [2017] 10 SCC 1, paras 168-70 (Chandrachud J).

[16] Amnesty International v Union of India [2021] SCC OnLine SC 1002, paras 12-15.

[17] Anupam Chander, ‘The Racist Algorithm?’ (2017) 115 Mich L Rev 1023, 1040-42.

[18] Constitution of India 1950, art 14; Anuj Garg v Hotel Association of India [2008] 3 SCC 1, para 30 (equality and non-arbitrariness).

[19] Constitution of India 1950, art 21; Maneka Gandhi v Union of India [1978] 1 SCC 248, para 56 (procedural fairness).

[20] KS Puttaswamy (n 5) paras 310-12.

[21] Danielle Keats Citron and Robert Chesney, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) 107 Calif L Rev 1753, 1770-72.

[22] Lawrence B Solum, ‘Legal Personhood for Artificial Intelligences’ (1992) 70 NC L Rev 1231, 1250-53; John C Coffee Jr, ‘No Soul to Damn: No Body to Kick: An Unscandalized Inquiry into the Problem of Corporate Punishment’ (1981) 79 Mich L Rev 386, 401-03 (on corporate veil).

[23] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015) 190-93.

[24] Amber Sinha, ‘The Algorithmic Society: The Indian Context’ (2021) 15 Jindal Global L Rev 45, 47-49.

[25] Chinmayi Arun, ‘AI and the Rule of Law: The Missing Indian Framework’ (2022) 10 Indian J Const L 78, 80-82.

[26] Proposal for a Regulation on Artificial Intelligence (AI Act) COM (2021) 206 final, arts 6-7.

[27] Ibid arts 13, 15, 16.

[28] Navtej Singh Johar v Union of India [2018] 10 SCC 1, para 253 (Misra CJ) (constitutional morality); White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights (2022) 12-15 <www.whitehouse.gov/ostp/ai-bill-of-rights> accessed 15 April 2025.

[29] Equal Employment Opportunity Commission, ‘Technical Assistance Document on AI and Employment’ (2023) <www.eeoc.gov> accessed 15 April 2025.

[30] Kate Crawford, ‘The Trouble with Bias’ (2017) 1 NIPS Conf Proc 1, 3-5.

[31] Provisions on the Administration of Algorithmic Recommendation in Information Services (China, 2022) arts 7, 12 <www.cac.gov.cn> accessed 15 April 2025.

[32] Anupam Chander, ‘The Racist Algorithm?’ (2017) 115 Mich L Rev 1023, 1045-47.

[33] Vidushi Marda, ‘Data Flow and Friction: Aadhaar and the Architecture of Exclusion’ (2020) 12 Indian J L & Tech 1, 18-20.

[34] KS Puttaswamy v Union of India [2017] 10 SCC 1, paras 310-12 (Chandrachud J) (privacy and state accountability).

[35] Digital Personal Data Protection Act 2023, s 17; Justice BN Srikrishna Committee, A Free and Fair Digital Economy (2018) 78-80 (recommending independent DPA).

[36] Anuradha Bhasin v Union of India [2020] 3 SCC 637, paras 45-47; PUCL v Union of India [1997] 1 SCC 301, para 17.

[37] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015) 200-03.

[38] Chinmayi Arun, ‘AI and the Rule of Law: The Missing Indian Framework’ (2022) 10 Indian J Const L 78, 85-87.

[39] Constitution of India 1950, art 21; Maneka Gandhi v Union of India [1978] 1 SCC 248, para 56.

[40] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015) 140-42.

[41] Danielle Keats Citron, ‘Technological Due Process’ (2008) 85 Wash U L Rev 1249, 1300-02.

[42] Anupam Chander, ‘The Racist Algorithm?’ (2017) 115 Mich L Rev 1023, 1035-37.

[43] Shayara Bano v Union of India [2017] 9 SCC 1, para 101 (manifest arbitrariness); Constitution of India 1950, art 14.

[44] Digital Personal Data Protection Act 2023, s 7; Vidushi Marda, ‘Data Flow and Friction: Aadhaar and the Architecture of Exclusion’ (2020) 12 Indian J L & Tech 1, 22-24.

[45] Justice BN Srikrishna Committee, A Free and Fair Digital Economy (2018) 78-80 (independent oversight).

[46] Reserve Bank of India, Banking Ombudsman Scheme (2006) cl 12 (binding powers).

[47] Proposal for a Regulation on Artificial Intelligence (AI Act) COM (2021) 206 final, art 16 (conformity assessments).

[48] Amber Sinha, ‘The Algorithmic Society: The Indian Context’ (2021) 15 Jindal Global L Rev 45, 60-62.

[49] Environment Protection Act 1986, s 3(2)(v) (impact assessments).

[50] General Data Protection Regulation (EU) 2016/679 [2016] OJ L119/1, art 22.

[51] Information Technology Act 2000, ss 43A, 79; Digital Personal Data Protection Act 2023, s 17.

[52] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021, r 4.

[53] Anuradha Bhasin v Union of India [2020] 3 SCC 637, para 45 (public consultation).

[54] KS Puttaswamy v Union of India [2017] 10 SCC 1, paras 310-12 (Chandrachud J).

[55] Modern Dental College v State of Madhya Pradesh [2016] 7 SCC 353, para 60 (proportionality); Shayara Bano (n 43) para 101.

[56] Puttaswamy (n 15) paras 168-70 (privacy and dignity).

[57] Navtej Singh Johar v Union of India [2018] 10 SCC 1, para 253 (Misra CJ) (constitutional morality); Constitution of India 1950, arts 14, 21; KS Puttaswamy v Union of India [2017] 10 SCC 1, paras 310-12 (Chandrachud J).

[58] Vidushi Marda, ‘Data Flow and Friction: Aadhaar and the Architecture of Exclusion’ (2020) 12 Indian J L & Tech 1, 18-20.

[59] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015) 140-42.

[60] Information Technology Act 2000, s 43A; Digital Personal Data Protection Act 2023, s 7.

[61] Justice BN Srikrishna Committee, A Free and Fair Digital Economy (2018) 78-80 (independent oversight); Chinmayi Arun, ‘AI and the Rule of Law: The Missing Indian Framework’ (2022) 10 Indian J Const L 78, 85-87.

[62] Maneka Gandhi v Union of India [1978] 1 SCC 248, para 56 (procedural fairness).
