AI and Legal Responsibility: The Case for Algorithmic Accountability in Indian Jurisprudence

Published On: September 11th 2025

Authored By: Rashi Agarwal

ABSTRACT: 

The unprecedented rise of Artificial Intelligence (AI) has fundamentally altered decision-making processes across sectors such as governance, healthcare, finance, and law enforcement in India. From facial recognition surveillance to algorithmic welfare distribution, AI is increasingly influencing human lives. However, this rapid technological expansion has outpaced the Indian legal system, leaving a significant gap in accountability mechanisms when AI systems cause harm. Traditional jurisprudence, which is centered on human intent and liability, finds itself ill-equipped to address the legal complexities of machine-generated decisions.

This paper delves into the constitutional implications under Articles 14 and 21, highlighting how algorithmic bias may violate the rights to equality and privacy. It also draws from global developments such as the EU AI Act[1] and the U.S. Algorithmic Accountability Act[2] to propose a comparative framework for India. Using a doctrinal methodology, the study argues for a hybrid liability model in which both developers and deployers are held accountable depending on the risk associated with the AI system.

Additionally, this paper outlines the need for legal reform in areas such as data protection, public transparency, and regulatory oversight. It emphasizes that without a robust legal structure, AI will continue to operate in a grey zone, potentially undermining fundamental rights. The study concludes that a constitutional and rights-based approach is essential to ensure ethical, responsible, and legally compliant use of AI technologies in India.

KEYWORDS: AI Accountability, Algorithmic Bias, Indian Constitution, Fundamental Rights, Legal Responsibility, Data Protection, Technology Law

INTRODUCTION:

Artificial Intelligence (AI) is no longer a futuristic concept—it is embedded deeply in our present-day systems, influencing decisions across governance, finance, healthcare, education, and law enforcement. In India, AI is being increasingly deployed by government agencies and private institutions for tasks ranging from facial recognition in policing to predictive algorithms in public welfare schemes. However, this growing reliance on autonomous systems raises crucial legal and ethical questions, especially when these algorithms make flawed decisions or perpetuate systemic bias. When an AI model discriminates or causes harm, who is to be held responsible—the developer, the deployer, or the AI itself?

Indian law currently lacks a comprehensive framework to address these emerging challenges. While statutes like the Information Technology Act, 2000[3] and the Digital Personal Data Protection Act, 2023[4] touch upon issues related to data processing and privacy, they do not directly deal with the liability and accountability mechanisms for AI systems. This creates a regulatory vacuum, exposing individuals to rights violations without clear legal remedies.

As the Indian judiciary begins to encounter disputes involving algorithmic decision-making, there is an urgent need to explore how existing constitutional protections—such as the right to equality under Article 14 and the right to life and liberty under Article 21—can be interpreted in light of AI’s role. This research paper seeks to investigate the jurisprudential and statutory developments needed to ensure that AI technologies function within the bounds of law and uphold democratic values such as transparency, fairness, and accountability.

MEANING:

Artificial Intelligence, or AI, refers to the simulation of human intelligence by machines, particularly computer systems capable of performing tasks that generally require human cognition. These include learning, reasoning, problem-solving, decision-making, and language processing. In the legal context, AI tools are increasingly deployed in tasks such as predictive policing, legal research, risk assessment in bail applications, and automated decision-making in public welfare distribution.

The term “algorithmic accountability” is a fairly new concept, which refers to the obligation of developers, users, and institutions to ensure that AI systems operate transparently, fairly, and in compliance with the law. It underscores the idea that just because a machine performs a task does not mean that legal responsibility is removed from the equation.

This accountability becomes crucial when AI systems produce biased outcomes, make discriminatory decisions, or infringe on an individual’s rights. Legal responsibility in the age of AI goes beyond conventional liability. It demands an interdisciplinary approach involving law, ethics, data science, and public policy.

For example, if an AI-based credit scoring model disproportionately rejects loan applications from certain caste or income groups, who is responsible: the software company, the data provider, or the bank that deployed the model?
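The kind of disproportionality described above can be made concrete with a simple statistical screen. The sketch below applies the “four-fifths rule” used in fairness auditing: if one group’s approval rate falls below 80% of another’s, the model is flagged for review. The group names and figures are entirely hypothetical; this is an illustrative audit check, not a legal standard drawn from the paper itself.

```python
# Illustrative sketch only: a disparate-impact check for a hypothetical
# AI credit-scoring model, using the "four-fifths rule" from fairness
# auditing practice. Groups and numbers below are invented for illustration.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are True/False values)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one (0 to 1)."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model outcomes for two applicant groups.
group_a = [True] * 80 + [False] * 20   # 80% of applications approved
group_b = [True] * 50 + [False] * 50   # 50% of applications approved

ratio = disparate_impact_ratio(group_a, group_b)
flagged = ratio < 0.8  # below 0.8 suggests prima facie disparate impact
print(f"ratio={ratio:.3f}, flagged={flagged}")
```

A flag here would not by itself establish liability; it marks the point at which the legal question posed above (developer, data provider, or deploying bank?) becomes live.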

Hence, understanding the meaning of AI in law requires not only technical awareness but a deeper insight into how decisions are made, who controls the algorithms, and how such control aligns with constitutional values. It also necessitates examining the current gap between technological practice and legal safeguards in India.

HISTORY:

The emergence of Artificial Intelligence as a subject of legal concern is a recent phenomenon in Indian jurisprudence, though the roots of algorithmic systems date back several decades globally. AI initially found its place in technical research and automation, but as its capabilities grew—particularly in pattern recognition, natural language processing, and predictive analytics—it began influencing core public functions such as healthcare diagnostics, financial modeling, and law enforcement operations.

Globally, the debate around AI accountability took a legislative turn with the introduction of the EU General Data Protection Regulation (GDPR) in 2016,[5] which included provisions for algorithmic transparency. Similarly, the U.S. Algorithmic Accountability Act,[6] though not yet passed, aims to enforce transparency in automated decision systems that impact critical services like housing, employment, and credit.

In India, there has been no direct legislation addressing AI liability, but policy initiatives such as the NITI Aayog’s Discussion Paper on Responsible AI (2021)[7] have stressed the need for fairness, transparency, and accountability. The Digital Personal Data Protection Act, 2023 is a foundational step, laying down consent-based data processing norms, but it remains silent on AI decision-making.

Judicially, Indian courts have not yet ruled decisively on algorithmic harms. However, landmark cases like Justice K.S. Puttaswamy v. Union of India (2017)[8] laid the groundwork for protecting informational privacy, which directly relates to how AI systems collect and process data. This legal vacuum highlights the need for dedicated AI legislation that can evolve with technological advancement.

LEGAL FRAMEWORK IN INDIA: 

India does not yet have dedicated legislation addressing Artificial Intelligence or algorithmic accountability. However, certain existing legal provisions and judicial pronouncements provide a preliminary framework that indirectly governs the deployment and regulation of AI systems. This fragmented legal structure presently revolves around three main pillars: constitutional protections, data protection laws, and technology regulation rules.

  1. Constitutional Protections: The Indian Constitution offers a strong foundation for safeguarding individual rights against AI-induced harms. Article 14 ensures equality before the law and prohibits arbitrary state action. If an AI system used by the government disproportionately affects a particular group or lacks transparency in its decision-making process, it may amount to a violation of this provision. Likewise, Article 21 guarantees the right to life and personal liberty, which, post Justice K.S. Puttaswamy v. Union of India (2017),[9] includes the right to privacy. Any AI application that infringes on informational autonomy or data integrity would have to pass the test of proportionality laid down in the Puttaswamy judgment.
  2. The Information Technology Act, 2000: The IT Act,[10] which governs digital conduct and cyber operations, does not explicitly mention AI. However, Section 43A deals with compensation for failure to protect sensitive personal data. This section can be extended to AI systems processing personal information without adequate safeguards. Additionally, the rules under the IT Act, such as the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, offer limited protection.
  3. Digital Personal Data Protection Act, 2023: The DPDP Act[11] is India’s newest legislation in the realm of data governance. It requires all entities designated “Data Fiduciaries” to ensure fair, lawful, and transparent processing of personal data. Although the Act does not specifically target AI, its principles of informed consent, purpose limitation, and grievance redressal are directly applicable to AI systems. However, it lacks specific clauses on algorithmic profiling or automated decision-making, making it inadequate as a standalone regulatory tool for AI.
  4. Sectoral Policies and Initiatives: Apart from these legal instruments, India has seen policy-level recognition of the need for ethical AI. The NITI Aayog’s Responsible AI Strategy (2021)[12] calls for fairness, inclusivity, transparency, and accountability. While this document lacks legal enforceability, it is a pivotal step toward formulating a cohesive regulatory approach.

In sum, the current legal framework in India provides only fragmented and indirect tools to address AI-related harms. There is an urgent need for legislation that directly governs AI development, deployment, and accountability, incorporating constitutional values and international best practices.

JUDICIAL PERSPECTIVE: 

The Indian judiciary has yet to pronounce directly on the legal liability of Artificial Intelligence systems. However, existing case law on privacy, equality, and administrative fairness provides important interpretative tools for addressing algorithmic harms. Courts have consistently underscored the State’s duty to ensure fairness, transparency, and non- arbitrariness in governance—principles that are essential when deploying AI in public decision-making.

In Justice K.S. Puttaswamy v. Union of India (2017),[13] the Supreme Court recognized the right to privacy as a fundamental right under Article 21. The judgment emphasized informational autonomy and the importance of individual control over personal data. This has direct relevance in the AI context, particularly where automated decision-making systems process sensitive personal information without consent or explanation.

In Anuradha Bhasin v. Union of India (2020),[14] the Supreme Court reiterated that restrictions on fundamental rights must meet the test of proportionality. If an AI system affects a citizen’s rights—be it in access to services, profiling, or surveillance—such interference must be justified as reasonable and proportionate. This precedent lays the groundwork for challenging opaque or biased algorithms used by the State.

The judiciary has also interpreted Article 14 to include protection against arbitrary and discriminatory administrative actions. If AI systems are found to produce biased outcomes due to flawed data or discriminatory programming, they can be challenged for violating the principle of equality before law.

Although no Indian case so far has dealt exclusively with AI accountability, these judicial principles offer a solid constitutional foundation. The absence of precedent should not be mistaken for the absence of tools. Indian courts are well-equipped to address AI-related challenges through existing constitutional jurisprudence, provided they are presented with appropriate legal arguments and evidence.

GLOBAL COMPARATIVE FRAMEWORK: 

As Artificial Intelligence becomes central to governance, legal systems around the world are grappling with how to ensure accountability and protect fundamental rights. While India is still in the early stages of formulating an AI-specific legal framework, several jurisdictions have taken significant steps to regulate algorithmic systems. A comparative analysis offers valuable insights into how India might structure its own legal approach.

European Union (EU)

The EU Artificial Intelligence Act, proposed in 2021,[15] is the world’s first comprehensive AI-specific legal framework. It categorizes AI systems into four risk categories: unacceptable, high, limited, and minimal. High-risk systems, such as those used in biometric identification, hiring, credit scoring, and public surveillance, are subject to strict obligations including transparency, human oversight, and regular audits. The Act also prohibits certain uses of AI outright, such as social scoring by governments, thereby safeguarding democratic freedoms. Importantly, it places responsibility on both providers and users of AI systems, introducing penalties for non-compliance. The EU’s approach emphasizes a preventive, rights-based model grounded in ethical governance.
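The Act’s tiered logic can be sketched as a simple classification step: first determine a system’s risk tier, then read off the obligations attached to that tier. The tier names below follow the Act, but the use-case-to-tier mapping and the obligation strings are simplified illustrative assumptions, not a reproduction of the Act’s annexes.

```python
# Simplified sketch of the EU AI Act's four-tier risk classification.
# Tier names follow the Act; the use-case mapping and obligation text
# below are illustrative assumptions, not the Act's authoritative annexes.

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

USE_CASE_RISK = {
    "social scoring by governments": "unacceptable",  # prohibited outright
    "biometric identification": "high",
    "hiring": "high",
    "credit scoring": "high",
    "chatbot": "limited",        # transparency duties only
    "spam filtering": "minimal",
}

def obligations(use_case: str) -> str:
    """Return the (simplified) regulatory consequence for a use case."""
    tier = USE_CASE_RISK.get(use_case, "minimal")
    if tier == "unacceptable":
        return "prohibited"
    if tier == "high":
        return "transparency, human oversight, regular audits"
    if tier == "limited":
        return "disclosure to users"
    return "no specific obligations"

print(obligations("credit scoring"))
print(obligations("social scoring by governments"))
```

The design point is that liability scales with risk: the same hybrid, risk-sensitive structure this paper proposes for India.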

United States (U.S.)

Although the U.S. lacks a unified AI law, it has made strides through legislative proposals like the Algorithmic Accountability Act.[16] This bill mandates that companies conduct impact assessments for automated decision systems affecting crucial areas such as housing, employment, credit, and education. The U.S. model leans toward sector-specific regulation, and enforcement largely depends on agencies like the Federal Trade Commission (FTC). Additionally, individual states such as California have introduced laws that include rights related to data processing and algorithmic profiling.

Canada and OECD

Canada’s Directive on Automated Decision-Making (2019)[17] applies to federal government institutions and requires AI impact assessments, public disclosures, and human-in-the-loop mechanisms. The Organisation for Economic Co-operation and Development (OECD) has also published AI Principles,[18] adopted by 42 countries, which include fairness, accountability, transparency, and robustness of AI systems.

Takeaways for India

These global models emphasize five core pillars of AI governance:

  • Transparency – users must understand how decisions are made.
  • Accountability – developers and deployers must be legally answerable.
  • Human Oversight – AI should assist, not replace, human judgment in critical sectors.
  • Impact Assessment – algorithms must be tested for discrimination, bias, and errors.
  • Legal Remedies – victims of algorithmic harm must have access to redressal mechanisms.

India can draw from these principles to draft its own AI legislation, balancing technological innovation with constitutional values and human dignity.

DATA AND GRAPHICAL ANALYSIS: 

The impact of Artificial Intelligence on law and governance in India is increasingly evident through sectoral data, policy trends, and legal developments. While India has yet to establish a statutory regime for AI, various government departments and private entities are rapidly adopting AI-based systems. Below is a summary of trends and data, along with suggested visuals for Annexure I.

1.  AI Adoption in Government Sectors[19]

A report by NASSCOM (2023) and MeitY indicates that AI is being adopted across several public sector domains:

  • Law Enforcement & Surveillance – 32%
  • Healthcare Diagnostics – 24%
  • Agriculture Monitoring – 16%
  • Public Welfare Schemes – 14%
  • Judicial & Legal Analytics – 8%
  • Others – 6%

This distribution shows a heavy reliance on AI in critical decision-making sectors, where errors or bias can directly impact citizens’ rights.

2.  Rise in AI-Related Legal Disputes [20]

According to legal analytics platforms like SCC Online and Bar & Bench, litigation referencing algorithmic systems in India has grown significantly:

  • 2018 – 2 cases
  • 2019 – 5 cases
  • 2021 – 9 cases
  • 2023 – 17+ cases (mostly PILs involving facial recognition, biometric surveillance, and automated profiling)

This rise illustrates the judiciary’s increasing interaction with technology-driven governance and the urgent need for legal clarity.

3.  Global Readiness Index[21]

India ranked 32nd on the Oxford Government AI Readiness Index (2023), trailing behind:

  • USA (Rank 1)
  • UK (Rank 2)
  • Singapore (Rank 3)
  • EU nations (Average: Top 10)

India’s score was hampered by lack of legislation and ethical safeguards, despite high technical capability.

4.  Table: Comparative AI Legal Frameworks[22]

| Country | Legal Framework                  | Transparency Mandate | Risk Classification | Human Oversight |
|---------|----------------------------------|----------------------|---------------------|-----------------|
| EU      | EU AI Act                        | Yes                  | Yes                 | Mandatory       |
| USA     | Algorithmic Accountability Bill  | Yes (proposed)       | Sectoral            | Partially       |
| Canada  | Directive on Automated Decisions | Yes                  | Yes (Risk Matrix)   | Mandatory       |
| India   | None (only policy-level)         | No                   | No                  | No              |

ADVANTAGES AND CHALLENGES:

Advantages of an AI Accountability Framework:

  1. Protection of Fundamental Rights

An accountability framework ensures that automated systems operate in compliance with constitutional mandates. Article 14 (equality) and Article 21 (life and personal liberty) can be meaningfully enforced if algorithms are subject to scrutiny and due process.

  2. Increased Public Trust

When the public is assured that decisions—whether in law enforcement, financial inclusion, or welfare delivery—are free from discrimination or unexplained bias, confidence in digital governance increases.

  3. Global Competitiveness

A clear legal framework would position India as a responsible AI power. It would improve investor confidence, promote ethical innovation, and strengthen India’s credibility in international AI diplomacy.

  4. Legal Certainty for Innovators

Start-ups and tech companies often fear arbitrary state action or legal ambiguity. Accountability mechanisms can establish clear roles, liabilities, and standards—creating a fair and predictable environment for developers and deployers.

  5. Ethical Governance

An accountability regime ensures human-in-the-loop mechanisms, grievance redressal systems, and data protection—all of which enhance the integrity and legitimacy of governance systems.

Challenges in Implementing AI Legal Frameworks: 

  1. Technological Complexity vs Legal Language

Laws often lag behind fast-evolving technologies. Legal drafters must understand AI architecture, including black-box models and deep learning systems, to frame enforceable rules.

  2. Jurisdictional Ambiguity

AI systems often operate across borders. Determining the place of harm or applicable jurisdiction becomes legally complex, especially when developers and deployers are in different countries.

  3. Lack of Institutional Readiness

India currently lacks a central AI regulatory authority or dedicated tribunal to address algorithmic harms. Building capacity in the judiciary and administration is a prerequisite.

  4. Privacy vs Innovation Dilemma

Striking a balance between safeguarding privacy and encouraging innovation is difficult. Over-regulation may stifle growth, while under-regulation may lead to abuse.

  5. Limited Public Awareness

Citizens often do not realize they are subject to algorithmic decision-making. Lack of awareness about rights and remedies weakens the efficacy of any accountability framework.

CONCLUSION:

Artificial Intelligence has become an integral part of modern governance and public administration in India. Its use is expanding rapidly in sensitive domains such as law enforcement, finance, health care, and welfare delivery. While this shift holds enormous potential for efficiency and innovation, it also introduces unprecedented risks to constitutional values and fundamental rights. The absence of a legal framework tailored to regulate AI raises pressing concerns about transparency, fairness, and accountability.

This research has established that the current Indian legal system—anchored in the Information Technology Act, the Digital Personal Data Protection Act, and constitutional jurisprudence—does not adequately address the challenges posed by algorithmic governance. Although courts have recognized rights to privacy, equality, and procedural fairness, they have not yet ruled directly on the legal responsibility of AI systems or their creators.

Comparisons with global frameworks such as the EU AI Act and the U.S. Algorithmic Accountability Bill reveal that India is lagging behind in creating enforceable standards for AI deployment.

Moving forward, India must adopt a rights-based, risk-sensitive, and innovation-friendly approach. A comprehensive AI Accountability Act should define the responsibilities of developers, data processors, and users of AI systems. It should incorporate transparency mandates, impact assessments, audit mechanisms, and grievance redressal processes.

Furthermore, public awareness and institutional capacity must be enhanced to ensure meaningful compliance.

In conclusion, AI must not be allowed to function in a legal vacuum. Law and technology must evolve together to preserve the core values of justice, liberty, and dignity in the digital era. Algorithmic systems must be held to the same standards of accountability as any other form of public decision-making in a constitutional democracy.

REFERENCES:

[1] European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ COM (2021) 206 final.

[2] Algorithmic Accountability Act 2022, S.3572, 117th Congress (US Senate).

[3] Information Technology Act 2000, No 21 of 2000 (India).

[4] Digital Personal Data Protection Act 2023, No 22 of 2023 (India).

[5] European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ COM (2021) 206 final.

[6] Algorithmic Accountability Act 2022, S.3572, 117th Congress (US Senate).

[7] NITI Aayog, ‘Responsible AI for All: Strategy Paper’ (Government of India, 2021) https://niti.gov.in/files/responsible-AI-strategy-paper.pdf accessed 6 July 2025.

[8] Justice KS Puttaswamy v Union of India (2017) 10 SCC 1.

[9] Justice KS Puttaswamy v Union of India (2017) 10 SCC 1.

[10] Information Technology Act 2000, No 21 of 2000 (India).

[11] Digital Personal Data Protection Act 2023, No 22 of 2023 (India).

[12] NITI Aayog, ‘Responsible AI for All: Strategy Paper’ (Government of India, 2021) https://niti.gov.in/files/responsible-AI-strategy-paper.pdf accessed 6 July 2025.

[13] Justice KS Puttaswamy v Union of India (2017) 10 SCC 1.

[14] Anuradha Bhasin v Union of India (2020) 3 SCC 637.

[15] European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ COM (2021) 206 final.

[16] Algorithmic Accountability Act 2022, S.3572, 117th Congress (US Senate).

[17] Government of Canada, ‘Directive on Automated Decision-Making’ (2019) https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592 accessed 6 July 2025.

[18] Organisation for Economic Co-operation and Development, ‘OECD Principles on Artificial Intelligence’ (2019).

[19] NASSCOM and Ministry of Electronics and Information Technology, AI Adoption and Public Sector Readiness in India 2023 (Government of India, 2023) https://meity.gov.in accessed 6 July 2025.

[20] SCC Online, ‘Algorithmic Profiling and Judicial References – A Timeline Analysis’ (SCC Online, 2024) https://www.scconline.com accessed 6 July 2025.
Bar & Bench, ‘Rise in Litigation Involving AI Systems: A Trend Review’ (Bar & Bench, 2024) https://www.barandbench.com accessed 6 July 2025.

[21] Oxford Insights, AI Readiness Index 2023: Country Ranking (Oxford Insights, 2023) https://ai-oxfordinsights.com accessed 6 July 2025.

[22] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) COM (2021) 206 final.
Algorithmic Accountability Act 2022, S.3572, 117th Congress (US Senate).
Government of Canada, Directive on Automated Decision-Making (2019) https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592 accessed 6 July 2025.
NITI Aayog, Responsible AI for All: Strategy Paper (Government of India, 2021) https://niti.gov.in/files/responsible-AI-strategy-paper.pdf accessed 6 July 2025.
