BRIDGING THE GOVERNANCE GAP: A HUMAN RIGHTS-BASED FRAMEWORK FOR GLOBAL ARTIFICIAL INTELLIGENCE REGULATION

Published on: 27th December 2025

Authored by: Shreya Das
KIIT School of Law

ABSTRACT

In the 21st century, Artificial Intelligence (AI) has emerged as a defining technology, reshaping economies, governance, and human interaction. Yet AI's advancement has outpaced the development of legal and ethical frameworks, producing a fragmented global governance landscape. This paper examines the widening gap between technological progress and regulatory oversight, arguing that the absence of a unified legal framework threatens fundamental human rights. By analyzing major regional models, namely the European Union's risk-based AI Act, the United States' innovation-driven strategies, China's state-centric framework, and India's evolving regulations, the study reveals a lack of interoperability among them. It calls for an international body anchored in principles of ethics, privacy, and equality. Practical mechanisms, including Human Rights Impact Assessments (HRIAs), a Global AI Charter, transparency commitments, and cross-border collaboration, are necessary to keep such a framework aligned with justice, inclusivity, and equality. Human dignity and ethical responsibility should lie at the heart of AI governance, enhancing AI's developmental potential while protecting fundamental rights, providing a coherent framework for justice, and preserving scope for human development across all sectors.

KEYWORDS

ARTIFICIAL INTELLIGENCE, HUMAN RIGHTS, GLOBAL GOVERNANCE, PRIVACY, DIGITAL ETHICS, ACCOUNTABILITY

INTRODUCTION

Artificial Intelligence has emerged as the driving force of the Fourth Industrial Revolution (Industry 4.0), reshaping the global economy, public administration, and social interaction, and establishing itself as one of the most transformative technologies of the 21st century.[1] Its autonomous decision-making, predictive analytics, and data-driven precision have extended its capabilities and spurred innovation across sectors ranging from healthcare and education to criminal justice and legal governance. Like a coin, however, this transformation has two sides: rapid technological advancement has outpaced the development of corresponding legal and ethical safeguards. The result is a governance vacuum, marked by fragmented national regulations and inconsistent accountability standards.[2] AI is deeply interconnected with fundamental human rights and now exerts a dominant influence on them, from privacy and equality to freedom of expression. This article argues that there is an urgent need for a coherent governance framework and that human rights must serve as the normative foundation of global AI regulation,[3] ensuring legitimacy, fairness, and accountability across jurisdictions. Unless those universal principles are protected, AI risks eroding the very dignity that law seeks to protect.

THE GOVERNANCE GAP IN GLOBAL AI REGULATION

Despite AI's revolutionary impact, the global regulatory landscape remains deeply fragmented. To date, no binding international instrument or organization governs the ethical design, deployment, or accountability of AI systems.[4] Instead, states and regional bodies have adopted frameworks reflecting their own political, economic, and cultural priorities. The European Union's Artificial Intelligence Act imposes strict obligations on high-risk systems to safeguard fundamental rights, whereas the United States has opted for a market-oriented approach built on voluntary standards and sector-specific guidance.[5] India has proposed the Digital India Act to promote innovation and ensure data sovereignty.[6] China, on the other hand, emphasizes national security through mechanisms such as algorithmic control and state oversight.[7] This divergence has created a governance gap, weakening global accountability and enabling regulatory arbitrage. The absence of global coherence risks turning AI governance into a competition of norms rather than a collaboration for justice, which not only undermines trust but also threatens the universality of human rights in the digital age. A 2018 paper from the EU Agency for Fundamental Rights (FRA) discussed the possibility of algorithmic discrimination against individuals, stating that “The principle of non-discrimination, as discussed in Article 21 of the Charter of Fundamental Rights of the European Union, needs to be taken into account when applying algorithms to everyday life.”[8]

HUMAN RIGHTS AS A UNIVERSAL ACTOR

International Human Rights Law (IHRL) provides a comprehensive normative foundation for the governance of emerging technologies such as Artificial Intelligence, through instruments including the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and the International Covenant on Economic, Social and Cultural Rights (ICESCR). These instruments set out principles that remain central to the ethical and legal management of AI and protect the inherent dignity, equality, and freedoms of all individuals.[9] AI systems constantly engage with rights safeguarded under these frameworks. For example, intrusive surveillance technology and opaque data gathering threaten the right to privacy and data protection guaranteed under Article 12 of the UDHR and Article 17 of the ICCPR.[10] Further, Article 7 of the UDHR, which guarantees equality, and Article 19, which safeguards freedom of thought and expression, are contravened by algorithmic bias, discrimination, automated moderation, and recommendation systems.[11] There is an urgent need to embed human rights obligations within AI governance through mandatory impact assessments, transparency norms, and avenues for redress. Doing so would transform abstract ethical principles into enforceable legal duties, establishing a coherent framework for both domestic and international AI policy.[12]

REGIONAL APPROACHES AND LESSONS FOR GLOBAL POLICY

Current approaches to Artificial Intelligence (AI) governance reflect the priorities and ideologies of different nations and regions. One of the most comprehensive legislative models is the European Union's Artificial Intelligence Act (AI Act), which introduces a risk-based framework classifying AI systems according to their potential harm to fundamental rights. The European Parliament has taken significant steps in this field, passing several resolutions addressing the use of AI in criminal justice, education, culture, and the audiovisual sector.[13] These resolutions also aim at establishing an ethical framework for the development, use, and deployment of AI. Despite the Act's mandatory requirements for transparency, accountability, and human oversight, its jurisdictional reach remains geographically confined, raising concerns over its universality.

In the United States, AI regulation tilts towards decentralized governance, relying on sector-specific rules and voluntary initiatives by private entities. Unlike the European Union's comprehensive legislative framework, it is primarily guided by policy directions from the White House and regulatory frameworks developed by federal agencies. While the Blueprint for an AI Bill of Rights and the 2023 Executive Order promote flexibility and technological leadership, they risk insufficient accountability and uneven protection of individual rights.[14]

In contrast, China has set out to build the world's earliest and most detailed AI regulatory framework. Its emerging governance regime will shape how the technology is built and used both within China and internationally, through measures such as the regulation of recommendation algorithms, the most ubiquitous form of AI on the internet. China has also established a reusable regulatory toolkit that will serve as a foundation for future rules. These regulations, however, require careful analysis of how they will affect China's AI trajectory, and they raise significant concerns regarding privacy and freedom of expression.[15]

India ranks among the top ten countries attracting AI investment and hosts a rapidly growing number of startups across diverse sectors, reflecting the country's fast-paced AI advancement. However, unlike the European Union with its comprehensive AI Act, India currently lacks standalone legislation addressing artificial intelligence. Initiatives under Digital India and the proposed Digital India Act aim at data sovereignty, inclusion, and citizen-centric governance. To establish a framework rooted in human rights principles, India must address its regulatory fragmentation and lack of interoperability.[16]

TOWARDS A RIGHTS-BASED GLOBAL AI FRAMEWORK

This technological transformation has exposed a fragmented mosaic of regional and national regulations, underscoring the urgent need for a central regulatory body to bridge the global governance gap and establish a coherent rights-based framework built on accountability, transparency, and inclusivity. To achieve such a framework, states and corporations must meet disclosure requirements for training datasets and submit to independent auditing, so as to prevent algorithmic bias and ensure redress where harm occurs.

Another essential element is cross-border cooperation: without collaborative governance, unilateral regulation will be ineffective. A truly legitimate global framework must build on shared ethics, audit mechanisms, and remedies for transnational AI harms, ensuring meaningful participation from the Global South in norm-setting and implementation. Protecting human dignity in the digital domain requires a rights-based international approach embedded in legislation and policy.

Embedding human rights within AI governance is not merely a moral aspiration but a legal necessity for achieving transparency, inclusivity, and accountability. The legitimacy of AI governance depends on its capacity to serve humanity rather than dominate it. Grounded in human rights rather than exclusion, AI should be not a source of threat but a powerful tool for empowerment, bridging the governance gap with justice at its heart. Regulating the AI domain with dignity, fairness, and justice at its core can transform the technology from a potential threat into a force for collective empowerment.

[1] Michael Cheng-Tek Tai, ‘The Impact of Artificial Intelligence on Human Society and Bioethics’ (2020) 32(4) Tzu Chi Medical Journal 339 https://pmc.ncbi.nlm.nih.gov/articles/PMC7605294/ accessed 8 November 2025

[2] UNESCO, Recommendation on the Ethics of Artificial Intelligence (adopted 23 November 2021) https://www.unesco.org/en/artificial-intelligence/recommendation-ethics accessed 8 November 2025

[3] Office of the United Nations High Commissioner for Human Rights, The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights (UN Doc A/HRC/39/29, Geneva, 3 August 2018) https://www.ohchr.org/Documents/Issues/DigitalAge/ReportPrivacyinDigitalAge/A_HRC_39_29_EN.pdf accessed 8 November 2025

[4] United Nations, Roadmap for Digital Cooperation (2020) https://www.un.org/digital-emerging-technologies/content/roadmap-digital-cooperation accessed 8 November 2025

[5] Anecdotes A.I, ‘AI Regulations in 2025: US, UK, Japan, China & More’ (Anecdotes AI, 29 October 2025) https://www.anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more accessed 8 November 2025

[6] Sanhita, ‘Explained: The Digital India Act 2023’ (Vidhi Centre for Legal Policy, 8 August 2023) https://vidhilegalpolicy.in/blog/explained-the-digital-india-act-2023/ accessed 8 November 2025

[7] Rogier Creemers, ‘China’s Social Credit System: An Evolving Practice of Control’ (SSRN Scholarly Paper No 3175792, 9 May 2018) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3175792 accessed 8 November 2025

[8] Rowena Rodrigues, ‘Legal and human rights issues of AI: Gaps, challenges and vulnerabilities’ (2020) Journal of Responsible Technology https://www.sciencedirect.com/science/article/pii/S2666659620300056 accessed 8 November 2025

[9] United Nations, Universal Declaration of Human Rights (adopted 10 December 1948) UNGA Res 217 A (III); International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) 999 UNTS 171; International Covenant on Economic, Social and Cultural Rights (adopted 16 December 1966, entered into force 3 January 1976) 993 UNTS 3

[10] United Nations Human Rights Council, The Right to Privacy in the Digital Age (A/HRC/39/29, 2018)

[11] Rikke Frank Jørgensen (ed), Human Rights in the Age of Platforms (MIT Press 2019)

[12] Dunstan Allison-Hope and Mark Hodge, ‘A Human Rights-Based Approach to Artificial Intelligence’ (BSR, 28 August 2018) https://www.bsr.org/en/blog/human-rights-based-approach-to-artificial-intelligence accessed 8 November 2025

[13] European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ COM (2021) 206 final, 2021/0106(COD) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 accessed 9 November 2025

[14] Tatevik Davtyan, ‘The U.S. Approach to AI Regulation: Federal Laws, Policies, and Strategies Explained’ (SSRN Scholarly Paper No 4954290, 9 September 2024) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4954290 accessed 9 November 2025

[15] Carnegie Endowment for International Peace, China’s AI Regulations and How They Get Made (Matt Sheehan, 10 July 2023) https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made?lang=en accessed 10 November 2025

[16] Swaraj Pandey, ‘Regulation of Artificial Intelligence in India: Legal challenges and the road ahead’ (SSRN Scholarly Paper No 5358959, 20 July 2025) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5358959 accessed 9 November 2025
