Artificial Intelligence and Human Rights – Striking a Balance in Global Governance

Published On: August 23rd 2025

Authored By: Bhavika Bansal
Kurukshetra University

Abstract

Artificial intelligence (AI) continues to transform societies, altering how governments operate, how people are monitored, how healthcare is delivered, and how consequential decisions are made. These changes raise significant questions about the protection of human rights. This article examines the relationship and tensions between AI and human rights such as privacy, non-discrimination, and freedom of expression, analysing how new technologies can both promote and violate basic liberties and drawing on international case studies from around the world.

It delves into how divergent regulatory strategies complicate global governance, with some countries enforcing stringent ethical standards while others prioritize AI development over human rights protections. To ensure that AI development aligns with universal human values, the article highlights the need for inclusive, rights-based AI frameworks and proposes international collaboration, legislative reform, and ethical oversight mechanisms. It advocates a global governance paradigm that preserves human dignity in the era of intelligent systems by striking a balance between innovation and protection.

Introduction

The world is changing at an unprecedented scale due to artificial intelligence (AI). AI refers to systems that replicate aspects of human intellect, including learning, reasoning, and self-correction, such as language models, surveillance software, and decision-making tools used in industries like healthcare, law, and finance. From chatbots to facial recognition, and from healthcare diagnostics to predictive policing, it is reinventing how people engage with technology.

But even as AI promises creativity and efficiency, it raises important moral and legal issues, especially with regard to human rights. Human rights are the essential liberties and privileges that every person holds, irrespective of gender, race, nationality, or origin. They are protected by the Universal Declaration of Human Rights (UDHR), which covers equality, freedom of speech, privacy, and protection from discrimination.

Whether through algorithmic content filtering, law enforcement monitoring, or employment decisions that affect people's lives, AI's effect on fundamental liberties is growing along with its reach, necessitating proactive regulation and oversight. Governments worldwide are working to develop data protection legislation and regulatory frameworks that balance the potential of artificial intelligence with the upholding of human dignity.

However, there are many challenges in this task. States struggle to draw boundaries that uphold both innovation and rights, even as they seek to give citizens access to AI and permit its ethical application. This raises a central question: What role should global governance play in ensuring that AI development respects and protects human rights?

AI and Human Rights Across Different Countries

AI is having a significant impact on human rights in different countries, both positive and negative.

Positive Impact:

If implemented ethically, AI has the potential not only to streamline processes but also to advance fundamental human rights. The following real-world examples show how it clearly fosters human rights in the areas of health, safety, and accessibility.

  1. DeepMind’s AlphaFold – Right to Health

DeepMind’s AlphaFold is an AI system that supports the right to health as stated in Article 25 of the UDHR. It has predicted millions of 3D protein structures from amino acid sequences, aiding efforts such as the discovery of new antibiotics. Determining a protein’s complex 3D structure experimentally can take years and cost enormous sums; AlphaFold can predict a structure within minutes, helping researchers understand what individual proteins do and how they interact with other molecules. [1]

  2. Similie AI – UNICEF for Disaster Response – Right to Safety

UNICEF, in partnership with Microsoft and academic institutions, has deployed AI-powered disaster early warning systems across Southeast Asia and Africa to uphold the right to life and safety (Article 3, UDHR). For instance, Similie is a Timor-Leste startup backed by UNICEF’s Climate Action Cohort that uses low-cost IoT sensors and LSTM-based AI to forecast floods in real time, sending alerts via SMS, email, and sirens. Since 2021, it has issued eight warnings, protecting over 300,000 people. Tools like the Mekong Commission’s “One Mekong” app combine satellite, weather, and local data to deliver hyper-local forecasts up to seven days in advance. These AI systems are boosting disaster preparedness and directly advancing the right to safety. [2]

  3. Be My Eyes App – Accessibility and Inclusion

Be My Eyes is an AI-powered accessibility tool, used in partnerships with organizations such as UNICEF and Microsoft, that combines image recognition and live video calls to assist visually impaired people with daily tasks like reading labels, recognizing items, and navigating public areas. Meta’s Ray-Ban smart glasses offer hands-free help via voice commands (“Hey Meta, Be My Eyes”), while the GPT-4-powered “Virtual Volunteer™” provides instant AI-driven visual descriptions of situations or objects in real time. Together, these innovations are advancing digital independence, inclusion, and accessibility under the Convention on the Rights of Persons with Disabilities. [3]

Negative Impact:

AI also poses grave risks to human rights when left unchecked. Here’s a glimpse into key harms through real-world examples:

  1. Netherlands SyRI Program – Socioeconomic Discrimination

The Dutch government used SyRI (System Risk Indication) to detect welfare fraud by linking a wealth of data on low-income and immigrant-heavy neighbourhoods, including housing, employment, tax, and family status records. Critics cautioned that the opaque, black-box approach unfairly singled out vulnerable populations. Following legal challenges by civil society groups and trade unions, including the Federation of Dutch Trade Unions (FNV) and human rights organizations, the District Court of The Hague shut down SyRI in February 2020, ruling that the program’s covert data methods risked socioeconomic discrimination and infringed the right to privacy guaranteed by Article 8 of the European Convention on Human Rights. [4]

  2. China’s Skynet – Mass Surveillance and Control Network

China has implemented a vast surveillance network, “Skynet”, with over 20 million CCTV cameras that monitor its population through behavioural tracking, real-time facial recognition, monitoring of public spaces, targeting of dissent, and control of ethnic minorities such as the Uighurs in Xinjiang, China. Launched under a 2014 plan aiming for nationwide coverage by 2020, the system aggregates government and private data to assess “trustworthiness.” People with poor social credit scores face severe limitations, including being denied access to high-speed rail and prohibited from flying (over 11 million train bans and roughly 17 million flight bans by 2018–2019), as well as restrictions on jobs, housing, loans, and schooling. These actions violate core human rights, including the rights to privacy, freedom of movement, and freedom of expression, and the system’s lack of accountability and transparency raises serious questions about its effect on individual freedoms and civil liberties. [5]

  3. Clearview AI – Facial Recognition and Privacy

Clearview AI is a United States facial recognition company that scraped more than 30 billion photos from the internet, including social media, to create a global biometric database without users’ knowledge or consent. This activity violated the European Union’s General Data Protection Regulation (GDPR); in September 2024, the Dutch Data Protection Authority fined Clearview AI €30.5 million (about $33.7 million) for these violations and warned of further fines of up to €5.1 million for ongoing non-compliance. Clearview AI has challenged the penalties, claiming that because it has no physical presence or clients in the region, it is not subject to EU legislation. [6]

Global Governance

As AI knows no borders, global governance is essential. While countries like India build domestic regulations, regional and international bodies are working to address AI’s ethical and legal impacts, ensuring it aligns with human rights and global standards.

Governance Domestically:

Countries are shaping AI governance to address local risks while promoting ethical use. In India, the upcoming Digital India Act and the National Strategy for AI focus on responsible innovation, privacy, and algorithmic accountability. For example, India’s AI-based facial recognition system at airports (DigiYatra) offers convenience but raises privacy concerns, prompting debate over data regulation. Similarly, AI tools used in Indian courtrooms for case prioritization aim to reduce backlog but highlight the need for transparency and fairness. These domestic efforts reflect the growing push to align AI with constitutional rights and public trust.

Governance Internationally:

  1. European Union AI Act (2024)
    The EU’s AI Act, which entered into force on 1 August 2024, was the first comprehensive legal framework governing AI by risk category. Real-time biometric surveillance and other “unacceptable risk” applications are prohibited, and high-risk AI systems must undergo Fundamental Rights Impact Assessments (FRIA). By encouraging accountability and transparency and building human rights assessments into the regulatory process, this approach leads by example. [7]
  2. UNESCO AI Ethics Guidelines (2021)
    UNESCO’s Recommendation on the Ethics of AI, adopted on 23 November 2021, calls on member states to regulate AI in accordance with international human rights standards. It places a strong emphasis on values such as equity, non-discrimination, transparency, and human oversight. [8]
  3. Council of Europe Framework Convention on AI (2024)
    Adopted on 17 May 2024, it is the first legally binding international treaty on AI, committing more than 50 participating countries to develop and deploy AI in ways that respect democracy, the rule of law, and human rights. [9]
  4. Toronto Declaration (2018)
    The Toronto Declaration, issued by Amnesty International and Access Now in May 2018, calls for transparency, impact assessments, and redress procedures, highlighting the importance of preserving equality and non-discrimination in machine learning systems. [10]
  5. UN Global Compact and AI Advisory Panel (2023-2024)
    In late 2023, the UN formed a high-level advisory body on AI, which has called for an “international scientific panel” on artificial intelligence, similar to the IPCC for climate change. Without regulation, UN Secretary-General António Guterres cautioned, AI might “deepen global inequalities” and become a divisive rather than a unifying force. [11]

Finding Equilibrium: Suggestions for Ethical AI Governance


AI governance is now shaped by international political agreements and enforceable regulations rather than theoretical conjecture and soft guidelines. From EU enforcement measures to UN-led international cooperation, today’s frameworks demonstrate a balance between innovation, ethics, safety, and human rights. A balanced approach to governance is necessary to uphold human rights while promoting innovation. The following are important strategies:

  1. Explainability and Algorithmic Transparency – AI decision-making needs to be auditable and comprehensible. Developers should disclose how AI systems operate and let people contest biased results.
  2. International Cooperation and Treaty Structures – AI necessitates legally enforceable international agreements, just as climate change and nuclear arms control do. Countries should take part in venues such as the Global Partnership on AI (GPAI) and support efforts like the Framework Convention on AI.
  3. Assistance to Developing Countries – Many nations lack the legal or technological framework necessary to properly regulate AI. International partnerships must support capacity-building and equitable access to AI technologies.
  4. Mechanisms for Accountability and Redress – There must be legal channels for people to seek justice and remedies when AI systems harm them, whether through discrimination, data leaks, or incorrect conclusions.

Conclusion

If artificial intelligence is developed and regulated appropriately, it has the potential to improve societies. The stakes for human rights increase dramatically as AI systems become increasingly integrated into everyday life. There is still more to be done, even if governments, tech firms, and international organizations have started to take action. A coordinated worldwide strategy is required, one that protects human rights, encourages openness, and guarantees moral responsibility. Finding this equilibrium is a moral requirement as well as a legal and technological problem. AI must complement humans rather than replace or threaten them. With the proper frameworks in place, we can harness AI’s promise while upholding the principles that define humanity.

References

[1] Olga Fink, Thomas Hartung, Sang Yup Lee & Andrew Maynard, AI for Scientific Discovery, World Economic Forum, 25 June 2024, https://www.weforum.org/publications/top-10-emerging-technologies-2024/in-full/1-ai-for-scientific-discovery/.

[2] Oulayvanh Sisounonth, Mekong River Commission Launches Education Hub, One Mekong App in Vientiane, The Laotian Times, 25 December 2024, https://laotiantimes.com/2024/12/25/mekong-river-commission-launches-education-hub-one-mekong-app-in-vientiane/

[3] Andrew Liszewski, The Verge, 15 May 2025, https://www.theverge.com/news/667613/ray-ban-meta-smart-glasses-ai-detailed-responses-call-a-volunteer/

[4] Welfare Surveillance System Violates Human Rights, Dutch Court Rules, The Guardian, 5 February 2020, https://www.theguardian.com/technology/2020/feb/05/welfare-surveillance-system-violates-human-rights-dutch-court-rules

[5] Xiao Qiang, The Road to Digital Unfreedom: President Xi’s Surveillance State, Journal of Democracy, Vol. 30, No. 1, pp. 53–67, January 2019, https://www.journalofdemocracy.org/articles/the-road-to-digital-unfreedom-president-xis-surveillance-state/

[6] Charlotte Van Campenhout, David Goodman & Mark Potter, Clearview AI Fined by Dutch Agency for Facial Recognition Database, Reuters, 4 September 2024, https://www.reuters.com/technology/artificial-intelligence/clearview-ai-fined-by-dutch-agency-facial-recognition-database-2024-09-03/

[7] Commission Signs Council of Europe Framework Convention on Artificial Intelligence, European Commission, 5 September 2024, https://digital-strategy.ec.europa.eu/en/news/commission-signs-council-europe-framework-convention-artificial-intelligence

[8] Recommendation on the Ethics of Artificial Intelligence, UNESCO, 23 November 2021, https://www.unesco.org/en/legal-affairs/recommendation-ethics-artificial-intelligence

[9] The Framework Convention on Artificial Intelligence, Council of Europe, https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence

[10] The Toronto Declaration, Amnesty International & Access Now, May 2018, https://www.torontodeclaration.org/

[11] UN Chief Appoints 39-Member Panel to Advise on International Governance of Artificial Intelligence, AP News, 27 October 2023, https://apnews.com/article/un-artificial-intelligence-international-governance-panel-6a72cafc8227bfef9abe3ebd75fc345d
