AI and Human Rights: Striking Balance in Global Governance

Published on: 22nd November 2025

Authored by: Shreya Mahesh Patil
ISMAILSAHEB MULLA LAW COLLEGE, SATARA

Abstract

The rapid growth of Artificial Intelligence (AI) presents both immense opportunities and significant risks. As AI systems increasingly influence areas like healthcare, law enforcement, employment, and civil administration, concerns about privacy, bias, surveillance, and autonomy have grown. At the core lies the potential clash between technological advancement and fundamental human rights. This article explores the evolving interface between AI and human rights within the context of global governance. It addresses the need for ethical frameworks, international regulatory cooperation, and jurisprudential responses that preserve human dignity while fostering innovation. Real-world cases from across jurisdictions are analyzed to demonstrate how legal systems are confronting the AI-human rights nexus.

Introduction

AI technologies are transforming the structure and function of modern societies. From predictive policing to automated hiring, AI’s influence transcends national borders. However, this rise also raises critical human rights issues—discrimination, surveillance, lack of accountability, and erosion of privacy, to name a few. In the absence of globally binding regulations, the challenge is twofold: how can states safeguard human rights while enabling AI innovation? And how can global governance mechanisms ensure that technological progress aligns with universally accepted human rights principles?

This article critically analyzes the interaction between AI and human rights, highlighting the necessity of a globally coordinated, human-centric governance model. It also examines real-life legal developments that demonstrate the evolving landscape.

AI: The Emerging Technological Power

AI refers to systems that simulate human intelligence to perform tasks such as learning, reasoning, and decision-making. These systems include machine learning, natural language processing, neural networks, and robotics.

In practice, AI has transformed sectors such as:

  • Healthcare: AI-driven diagnostic algorithms and clinical decision-support systems
  • Banking: AI-powered fraud detection systems
  • Criminal justice: facial recognition and risk-assessment tools that assist investigations
  • Employment: automated hiring systems that shortlist candidates
  • Public administration: AI-assisted welfare delivery

However, AI’s predictive nature relies heavily on data. Biased data or opaque algorithms can result in discriminatory outcomes, affecting people unequally based on race, gender, or socioeconomic status—raising serious human rights concerns.
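The mechanics of this problem can be sketched in a few lines. Even when a protected attribute is excluded from a system's inputs, a rule keyed to a correlated proxy reproduces the disparity. The data, the postcode proxy, and the scoring rule below are all invented for illustration; real systems are far more complex, but the failure mode is the same.

```python
# Toy illustration of proxy discrimination: the scoring rule never sees
# the protected attribute "group", yet postcode acts as a proxy for it.
applicants = [
    # (group, postcode, qualified)
    ("A", "north", True), ("A", "north", True), ("A", "north", False),
    ("B", "south", True), ("B", "south", True), ("B", "south", False),
]

# A rule learned from historically skewed approvals keys on postcode
# rather than on qualification alone.
def biased_score(postcode: str, qualified: bool) -> bool:
    return postcode == "north" and qualified

approval_rate = {}
for group in ("A", "B"):
    rows = [a for a in applicants if a[0] == group]
    approved = sum(biased_score(p, q) for _, p, q in rows)
    approval_rate[group] = approved / len(rows)

# Equally qualified groups receive unequal outcomes.
print(approval_rate)
```

Here both groups contain the same mix of qualified candidates, yet group B's approval rate is zero, which is precisely the kind of disparate outcome the human rights framework below is concerned with.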

Human Rights Framework: A Legal Overview

Human rights are inalienable entitlements owed to every individual by virtue of being human. These include:

  • Right to privacy (Article 12, UDHR; Article 17, ICCPR)
  • Right to equality and non-discrimination (Article 7, UDHR)
  • Right to freedom of expression (Article 19, UDHR)
  • Right to remedy and fair trial (Articles 8 and 10, UDHR; Article 14, ICCPR)
  • Right to work and fair employment conditions (ICESCR)

The Interplay: AI and Human Rights

1. Right to Privacy vs. AI Surveillance

AI-powered surveillance tools like facial recognition and biometric scanners pose major threats to individual privacy. China’s Social Credit System and mass surveillance practices have sparked global debates.

Lopez Ribalda and Others v. Spain (ECtHR, Grand Chamber, 2019) – The Court confirmed that covert video surveillance of employees engages the right to privacy under Article 8 of the European Convention on Human Rights and must be necessary and proportionate, although on the facts the Grand Chamber found the surveillance justified by a reasonable suspicion of serious theft.

Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) – The Indian Supreme Court recognized privacy as a fundamental right in the context of Aadhaar (India's biometric ID program).

2. Non-Discrimination and Algorithmic Bias

AI systems have replicated and magnified societal biases. For example, Amazon’s AI recruiting tool was scrapped after it showed bias against female candidates.

State v. Loomis (2016, Wisconsin Supreme Court) – The use of the COMPAS algorithm in sentencing was challenged on the grounds that it could produce racially biased outcomes. The court allowed its use with caution, emphasizing transparency.

3. Right to Due Process and Automated Decision-Making

Decisions affecting rights (such as denial of welfare or asylum) should be just, reasoned, and appealable. AI systems that make opaque decisions often fail this test.

SyRI Case (Netherlands, 2020) – The Dutch government’s automated welfare fraud detection system was struck down by the court for violating privacy and data protection norms. It disproportionately targeted low-income and migrant communities.

4. Freedom of Expression and Content Moderation

AI-driven moderation on platforms like YouTube or Facebook may wrongfully censor content, affecting free speech.

Amnesty International reported that automated moderation disproportionately silences marginalized voices. The lack of human oversight results in wrongful takedowns.

Striking a Balance: Global Governance and Regulatory Efforts

To ensure AI aligns with human rights, global governance must be inclusive, rights-based, and accountable.

1. Existing Global Efforts

a. The Organisation for Economic Co-operation and Development (OECD) AI Principles (2019) advocate for human-centric AI systems grounded in transparency, accountability, and robustness, offering a voluntary framework embraced by numerous countries.

b. The United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence (2021) marks the first global normative instrument dedicated solely to AI ethics, emphasizing inclusion, gender equality, and the importance of data governance.

c. The European Union (EU) Artificial Intelligence Act (2024) introduces a pioneering risk-based regulatory model, imposing strict compliance obligations on high-risk AI systems such as biometric surveillance and credit scoring tools.

d. India’s Digital Personal Data Protection Act, 2023 establishes rules for data processing and user consent but has been criticized for insufficient safeguards against invasive AI profiling and for granting broad exemptions to government entities.

These evolving regulatory landscapes underscore the need for coherent global cooperation that upholds human rights while fostering technological progress.

2. Key Elements for Balanced Governance

a. Transparency: Algorithms must be auditable and explainable to affected persons.

b. Accountability Mechanisms: Developers and deployers should be legally responsible for harms.

c. Non-Discrimination Safeguards: Mandatory bias audits and inclusive datasets are essential.
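A minimal form of such a bias audit is the "four-fifths rule" drawn from US employment-discrimination practice: a selection rate for any group below 80% of the highest group's rate is flagged as evidence of adverse impact. The sketch below, with invented counts, shows the computation.

```python
# Minimal bias audit: selection-rate comparison under the four-fifths rule.
# The group names and counts are invented for illustration.
outcomes = {
    "group_a": {"selected": 48, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

# Selection rate per group, and the best rate as the benchmark.
rates = {g: d["selected"] / d["total"] for g, d in outcomes.items()}
best = max(rates.values())

# Flag any group whose selection rate falls below 80% of the best rate.
flags = {g: (r / best) < 0.8 for g, r in rates.items()}
print(rates, flags)
```

With these figures, group_b's rate is 0.30 against a benchmark of 0.48 (a ratio of 0.625), so it is flagged; a real audit would pair such a screen with inclusive datasets and a substantive review of the model and its training data.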

d. Right to Human Oversight: Automated decisions affecting rights must allow for human review.

e. International Harmonization: Treaties, standards, and global watchdog bodies are needed.

Notable Global Cases Involving AI and Human Rights

  • SyRI Case (Netherlands) – Privacy, equality – Government AI system struck down
  • Lopez Ribalda v. Spain (ECtHR) – Privacy – Proportionality test for covert workplace surveillance affirmed; no violation found on the facts
  • State v. Loomis (USA) – Due process, fair trial – AI use allowed with caution
  • Aadhaar case (India) – Privacy – Data collection allowed with restrictions

1. Justice K.S. Puttaswamy (Retd.) v. Union of India

Citation: Justice K.S. Puttaswamy (Retd.) v. Union of India [2017] 10 SCC 1 (India)
Court: Supreme Court of India

Summary:
This landmark case declared the right to privacy as a fundamental right under Article 21 of the Indian Constitution. The judgment came in response to concerns over the Aadhaar biometric ID system and its implications on individual autonomy and data protection. The Court emphasized that any infringement on privacy must satisfy the test of legality, necessity, and proportionality.

Implication:
AI systems used in biometric surveillance and data analytics must now conform to this constitutional protection, especially in public administration and welfare delivery.

2. Lopez Ribalda and Others v. Spain

Citation: Lopez Ribalda and Others v. Spain [2019] ECHR 752
Court: European Court of Human Rights (Grand Chamber)

Summary:
Supermarket employees were monitored through covert video surveillance without being informed, after the employer detected unexplained losses. A Chamber of the Court initially found a violation of Article 8 of the European Convention on Human Rights (right to privacy), but the Grand Chamber held in 2019 that, on the facts, the surveillance was proportionate: it was limited in scope and duration and justified by a reasonable suspicion of serious theft. The judgment nonetheless confirms that covert workplace monitoring engages Article 8 and must satisfy the tests of necessity and proportionality.

Implication:
AI-based surveillance tools used in the workplace or in public settings must be necessary and proportionate, and employees should as a rule be informed; covert monitoring is permissible only in narrow, justified circumstances, especially in Europe.

3. State v. Loomis

Citation: State v. Loomis 881 N.W.2d 749 (Wis. 2016)
Court: Wisconsin Supreme Court, United States

Summary:
The defendant challenged the use of the COMPAS algorithm in determining his sentence, arguing that it violated due process by relying on a proprietary (black-box) risk assessment system that might contain racial biases. The court upheld its use but mandated that judges be cautious and informed about the tool’s limitations.

Implication:
This case highlights judicial unease over opaque AI systems influencing criminal justice outcomes and underscores the need for transparency and accountability.

4. SyRI Case (Netherlands)

Citation: District Court of The Hague (5 February 2020), ECLI:NL:RBDHA:2020:865
Court: District Court of The Hague

Summary:
The Dutch government used an AI system called SyRI (System Risk Indication) to detect welfare fraud. The system disproportionately targeted low-income and migrant communities. The Court ruled that the use of SyRI violated Article 8 of the ECHR due to its lack of transparency and safeguards, effectively striking down the program.

Implication:
This case serves as a critical precedent for rejecting discriminatory and opaque AI tools in public governance.

5. Google Spain SL v. Agencia Española de Protección de Datos (AEPD)

Citation: Google Spain SL v. AEPD (Case C-131/12) EU:C:2014:317
Court: Court of Justice of the European Union (CJEU)

Summary:
The Court held that individuals have a “right to be forgotten” under the EU’s Data Protection Directive, allowing them to request search engines like Google to delist links to personal data that are inadequate, irrelevant, or no longer relevant.

Implication:
AI systems processing personal data must respect data minimization and erasure rights under European data protection law.

Challenges in Global Governance

  1. Jurisdictional Fragmentation: Different national laws on AI and data create regulatory gaps.
  2. Corporate Power vs. State Authority: Big Tech’s transnational nature limits state control.
  3. Algorithmic Opacity: Trade secrets and complex models reduce transparency.
  4. Digital Divide: Many countries in the Global South lack the legal and institutional capacity to regulate AI effectively.

A Human-Centric AI Ecosystem

A human-centric AI ecosystem requires a robust, legally grounded framework that prioritizes human rights, transparency, and accountability. As artificial intelligence becomes increasingly integrated into decision-making processes across sectors, there is an urgent need to establish a binding international treaty—ideally under the auspices of the United Nations—that mirrors the approach taken in global climate change agreements. Such a treaty should set enforceable standards for algorithmic governance, cross-border data ethics, and AI system accountability.

In parallel, national institutions such as Human Rights Commissions and Data Protection Authorities must be empowered with the legal authority and technical capacity to audit and monitor AI deployments, particularly in public services and high-risk domains. To ensure that individuals can effectively exercise their rights in the digital age, comprehensive efforts must also be made to promote digital literacy and education, equipping citizens with the knowledge to understand and challenge automated decisions.

Furthermore, the establishment of independent AI ethics boards—comprised of multidisciplinary experts and insulated from political and corporate influence—is essential to oversee algorithmic fairness, assess societal impacts, and provide transparent, ethical guidance. Together, these legal and institutional reforms form the foundation for an AI governance model that safeguards human dignity and democratic accountability in the age of automation.

Conclusion

AI, when guided by the right legal and ethical compass, can be a transformative force for good. But unregulated AI poses real threats to fundamental human rights. Global governance mechanisms must move beyond soft law and voluntary codes to binding, rights-based regulations. Advocates, policymakers, and technologists must collaboratively work toward a digital future where technological progress does not come at the cost of human dignity.

By anchoring AI innovation in the universality of human rights, we can ensure that the digital revolution becomes a force for empowerment, not exclusion.

References

  1. Justice K.S. Puttaswamy (Retd.) v. Union of India [2017] 10 SCC 1 (SC).
  2. Lopez Ribalda and Others v. Spain [2019] ECHR 752.
  3. State v. Loomis 881 N.W.2d 749 (Wis. 2016).
  4. District Court of The Hague (5 February 2020), ECLI:NL:RBDHA:2020:865.
  5. Google Spain SL v. Agencia Española de Protección de Datos (AEPD) (Case C-131/12) EU:C:2014:317.
  6. Universal Declaration of Human Rights, UNGA Res 217A (III), UN Doc A/810 (1948).
  7. OECD, Recommendation of the Council on Artificial Intelligence (22 May 2019) https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
  8. UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021) https://unesdoc.unesco.org/ark:/48223/pf0000381137.
  9. Amnesty International, Automated Censorship and Discrimination Online (2021) https://www.amnesty.org.
  10. Equality and Human Rights Commission, Unlocking the Power of AI (2020).
