ARTIFICIAL INTELLIGENCE AND HUMAN RIGHTS: Striking a Balance in Global Governance

Published On: August 17th 2025

Authored By: Christopher Benjamin Chukwuemeka
University of Lagos

Abstract

The rise of artificial intelligence (AI) has ushered in a technological revolution that touches nearly every aspect of human life—healthcare, finance, education, security, entertainment, and even legal decision-making. The rapid growth of AI has, over time, cemented its usefulness, contribution, and influence in human life, from food, clothing, and shelter to education, reproduction, and even the inherent rights of humans: human rights.

While AI holds the potential to enhance human dignity and well-being, it also raises serious ethical, legal, and social concerns. These include threats to privacy, equality, freedom of expression, and even autonomy. In democratic societies, concern about the consequences of our growing reliance upon Artificial Intelligence is rising[1].

In the face of these developments, global governance mechanisms must adapt to ensure that AI technologies are developed and deployed in ways that respect and protect fundamental human rights. This article explores the interaction between artificial intelligence and human rights, critically examines global efforts at regulation and governance, and proposes pathways to achieve a balanced framework that supports both innovation and rights-based accountability.

Understanding Artificial Intelligence and Its Applications

Many definitions of AI have been offered, the first of which came in 1956 during the Dartmouth Summer Research Project on Artificial Intelligence. John McCarthy, one of the founding fathers of the discipline, defined as “intelligent” any system capable of performing actions that would be qualified as intelligent if a human being accomplished them[2]. In other words, artificial intelligence refers to a machine or system capable of performing tasks that would require intelligence if carried out by a human being. AI systems can be categorized broadly into three types: narrow AI (designed for specific tasks), general AI (human-like cognitive abilities), and superintelligent AI (hypothetical, surpassing human intelligence).

The application of AI is vast and growing. In healthcare, AI aids diagnosis, treatment recommendations, and drug discovery. AI-driven algorithms power stock trading, credit scoring, and fraud detection. In criminal justice, predictive policing and risk assessment tools influence investigations and sentencing. AI also supports welfare distribution, identity verification, and immigration control. Facial recognition, gait analysis, and biometric tracking are used by both democratic and authoritarian regimes. Digital technology in the twenty-first century has ushered in what some have called the “golden age of surveillance”, conducted not only by states and corporations but also by non-state actors[3].

Essentially, the result of an operation performed by an intelligent system is often indistinguishable from one carried out by a human. The spread of artificial intelligence will therefore inevitably influence human rights, positively or adversely.

Implications of AI on Human Rights

The concept of human rights receives intense attention and is treated as a delicate issue because it concerns the very existence of every person, irrespective of geographical and cultural differences. Human rights were succinctly defined by Kayode Eso JSC (as he then was) in Ransome-Kuti & Ors v Attorney-General of the Federation & Ors[4] thus: human rights “are rights that have always existed, even before orderliness prescribed rules for the manner they are to be sought. It is a primary condition to a civilized existence which stands above the ordinary laws of the land”. In virtually every country, human rights are entrenched in the law of the land.

AI impacts nearly all internationally recognized human rights, as enshrined in instruments like the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and regional frameworks like the African Charter on Human and Peoples’ Rights. Chief among these is the UDHR of 1948, which declared fundamental rights for all human beings and called on all states to protect them[5].

Artificial intelligence interacts with human rights in the following ways:

  • Right to Privacy: AI systems often depend on the collection and processing of massive datasets, including personal and biometric data. Facial recognition technologies used for surveillance can track individuals without their consent, violating privacy rights and fostering a climate of fear. In countries with weak data protection laws, individuals may be unaware of how their data is collected, shared, or used.
  • Freedom of Expression and Access to Information: Social media platforms use AI to moderate content. While this helps combat misinformation, hate speech, and violence, it can also suppress legitimate dissent or minority views if not carefully calibrated. The opacity of content moderation algorithms means that individuals may be unable to challenge the removal or suppression of their posts—posing risks to free speech.
  • Equality and Non-Discrimination: AI systems trained on biased datasets can reinforce existing inequalities. In employment, credit, and criminal justice, algorithms have been shown to discriminate based on race, gender, or socioeconomic status. For instance, predictive policing tools have disproportionately targeted minority communities, leading to systemic injustices.
  • Right to Due Process and a Fair Trial: Automated decision-making in legal or administrative contexts may undermine due process rights. If individuals do not understand how decisions affecting their rights are made—or are unable to challenge them—fundamental legal protections are compromised.
  • Right to Work and Social Security: As automation replaces human labor in various sectors, many workers face displacement. This raises concerns about the right to work, fair wages, and access to social security. The transition to AI-based economies must ensure that affected populations are protected through retraining and social support systems.
  • Freedom of Assembly and Association: Surveillance technologies powered by AI can deter people from attending protests or joining activist groups. When governments monitor gatherings using drones and facial recognition, the chilling effect on civic engagement is significant.

Fundamentally, the prospect of artificial intelligence taking decisions on behalf of human beings has generated many positions on its influence on human rights, some positive and some negative. On the positive side, AI has undoubtedly eased activity in virtually every sphere of life. Certain intelligent systems have made the monitoring of human rights abuses easier, and, consequently, proving human rights violations in courtrooms has become less onerous.

Global Governance Approaches to AI and Human Rights

Addressing the human rights implications of AI requires international cooperation and a robust global governance framework. Several initiatives have emerged, though challenges remain. These initiatives will be explored below.

The United Nations has taken steps to recognize the intersection of AI and human rights. In 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, the first global standard-setting instrument on AI. It calls for human rights-based approaches, transparency, accountability, and bans on AI applications that pose a threat to human dignity (e.g., social scoring and mass surveillance). The UN Human Rights Council has also highlighted the importance of aligning AI development with human rights standards, urging member states to implement safeguards.

The Organisation for Economic Co-operation and Development (OECD) released principles promoting inclusive, sustainable, and human-centric AI. These principles have been endorsed by more than 40 countries and advocate for transparency, robustness, and accountability. 

The European Union has taken a regulatory lead with its AI Act, adopted in 2024, which establishes a comprehensive regulatory framework for artificial intelligence focused on safety, ethical use, and risk management, and which categorizes AI systems by risk. High-risk applications such as biometric identification, critical infrastructure, and recruitment are subject to strict requirements, while certain uses, such as social scoring, are banned outright. The Act is grounded in the EU Charter of Fundamental Rights and aims to protect citizens without stifling innovation.

In Africa, the governance of AI is still developing. The African Union’s Digital Transformation Strategy (2020–2030) encourages ethical AI adoption, but enforcement mechanisms are limited. Countries like Nigeria and Kenya have begun developing national AI strategies, yet these often lack strong human rights frameworks or stakeholder engagement.

Challenges in Striking the Balance

Despite progress, several challenges hinder effective governance of AI in relation to human rights:

  • Regulatory Gaps and Fragmentation: There is no universally binding treaty on AI and human rights. Different countries adopt different approaches—some emphasize innovation and market freedom, while others prioritize restrictions. This regulatory divergence can lead to “AI havens” where companies exploit weak oversight.
  • Lack of Transparency and Explainability: Many AI systems, especially those built on deep learning, operate as “black boxes”—their internal logic is inaccessible or incomprehensible. This opacity complicates accountability and judicial review.
  • Private Sector Dominance: A handful of large tech corporations control much of the world’s AI research and infrastructure. Their interests may not always align with public welfare or rights protections. Self-regulation often proves inadequate in the face of commercial pressures.
  • Capacity Gaps in the Global South: Many developing countries lack the legal, technical, and financial capacity to regulate AI effectively or participate meaningfully in global norm-setting. This raises concerns of digital colonialism, where AI systems developed in the Global North are exported without due regard to local contexts or rights.
  • Enforcement and Redress: Even where laws exist, enforcement is often weak. Victims of AI-related rights violations may struggle to obtain redress due to legal ambiguity, jurisdictional challenges, or the sheer complexity of AI systems.

Striking a Balance: Recommendations for Rights-Respecting AI Governance

  • Adopt Binding International Frameworks

Global efforts should aim toward a binding international convention on AI and human rights. This should draw on existing human rights treaties and establish minimum standards, similar to the GDPR for data protection.

  • Multi-Stakeholder Participation

Effective governance requires input from diverse stakeholders—governments, tech companies, civil society, academia, and marginalized communities. Inclusive consultation ensures policies are grounded in lived realities and public interest.

  • Prioritize Transparency and Accountability

Regulations should mandate that AI systems affecting rights be explainable, auditable, and open to scrutiny. Impact assessments and algorithmic audits should be routine, particularly in high-risk areas.
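By way of illustration only, the sketch below shows what a minimal, routine algorithmic audit could look like in code. It is a simple demographic parity check written in Python with hypothetical data and group labels (none of which come from this article or any regulatory text): it compares the rate of favourable automated decisions across two groups and flags a disparity above a chosen threshold.

```python
# Minimal sketch of a demographic parity audit for an automated decision system.
# Hypothetical data: each record pairs a protected-group label with the
# system's binary decision (True = favourable outcome, e.g. an approval).
from collections import defaultdict

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

# Selection rate = share of favourable outcomes within each group.
totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += int(outcome)

rates = {g: favourable[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}")

# Flag the audit if the gap between groups exceeds a chosen threshold
# (the 0.2 value here is illustrative, not a legal standard).
gap = max(rates.values()) - min(rates.values())
print(f"disparity: {gap:.2f}", "-> review required" if gap > 0.2 else "-> within threshold")
```

In practice, such checks would run on real decision data, cover several protected attributes and fairness metrics, and feed into the broader human rights impact assessments recommended here.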

  • Promote Ethical AI Design and Deployment

Developers must embed human rights into the AI development process through ethical design principles, bias testing, and risk mitigation. Corporate social responsibility should extend to AI governance.

  • Build Capacity in the Global South

International organizations and donor countries should invest in building AI governance infrastructure in developing nations—supporting legal reforms, technical training, and public awareness.

  • Protect Labor Rights in the AI Economy

Governments must implement policies that anticipate and cushion the disruptive effects of automation. This includes lifelong learning, universal basic services, and social protection systems.

  • Ban High-Risk and Harmful Applications

Certain uses of AI, such as lethal autonomous weapons, predictive policing, or real-time facial recognition in public spaces, should be prohibited due to their disproportionate threat to rights.

Conclusion

Artificial intelligence is reshaping the contours of society, offering immense opportunities and equally serious risks. As this transformation unfolds, the imperative to safeguard human rights must remain central. Striking a balance between innovation and regulation, between private enterprise and public accountability, and between national sovereignty and global norms is not only a legal challenge—it is a moral one.

In achieving this balance, global governance must prioritize transparency, equity, and justice. AI must serve humanity, not the other way around. For that to happen, legal frameworks, institutions, and stakeholders must collaborate to ensure that the next chapter of technological progress is written with the ink of human dignity.

References

[1] Donahoe, E. and Metzger, M.M. (2019). ‘Artificial Intelligence and Human Rights’, Journal of Democracy, 30(2), pp. 115-126.

[2] Cataleta, M.S. and Cataleta, A. (2020). ‘Artificial Intelligence and Human Rights: An Unequal Struggle’, CIFILE Journal of International Law, 1(2), pp. 40-63.

[3] Livingston, S. and Risse, M. (2019). ‘The Future Impact of Artificial Intelligence on Humans and Human Rights’, Ethics & International Affairs, 33(2), pp. 141-158.

[4] (1985) 2 NWLR

[5] Ebad, R., Leila, R.D. and Mahmoud, J.K. (2017). ‘Protection of Prisoner’s Human Rights in Prisons through the Guidelines of Rule of Law’, Journal of Politics and Law, 10(1). Available at: <https://www.researchgate.net/publication/311972901_Protection_of_Prisoner%27s_Human_Rights_in_Prisons_through_the_Guidelines_of_Rule_of_Law> accessed 17 October 2021.
