Published On: August 25th 2025
Authored By: Darwin Sahay S
Government Law College, Ramanathapuram
Abstract
The exponential rise of Artificial Intelligence (AI) has introduced both promise and peril into the fabric of modern society. While AI enhances efficiency across sectors like healthcare, governance, and education, its unregulated deployment raises serious concerns about individual rights and freedoms. This article critically examines how AI interacts with core human rights principles, focusing on issues of privacy, equality, freedom of expression, and due process. Through global case studies and legal frameworks, including the European Union’s AI Act and India’s constitutional jurisprudence, the paper argues for a governance model that aligns technological innovation with human rights protection.[1] It advocates for legally binding international norms, ethical corporate practices, and participatory policymaking to ensure AI is used responsibly and equitably. The article concludes that striking a global balance is not just a legal necessity but a moral imperative in the age of intelligent machines.
Introduction
Artificial Intelligence (AI) has swiftly evolved from a scientific novelty to a pivotal technology shaping contemporary life. From predictive algorithms and facial recognition to automated decision-making and large language models, AI systems are now influencing diverse sectors such as healthcare, criminal justice, social media, public administration, and finance. While AI offers unprecedented opportunities to enhance human development, productivity, and innovation, it also raises serious human rights concerns, particularly in the absence of adequate legal and ethical frameworks.
The global legal community now faces a critical question: how can we regulate the development and use of AI technologies without stifling innovation, while simultaneously safeguarding fundamental human rights? This paper explores the dual-edged nature of AI, its impact on human rights, and the pressing need for a human rights-centric approach to global AI governance.
The Promises and Perils of AI in the Human Rights Context
AI holds transformative potential for advancing human rights. In healthcare, AI-powered tools are improving diagnostics, personalizing treatments, and expanding access to services.[2] In education, adaptive learning platforms enhance student engagement and help bridge learning gaps.
Governments use AI for social welfare distribution, fraud detection, and public service optimization.[3] These innovations, when appropriately implemented, support the right to health, education, and development.
However, AI also poses significant risks. Without transparency, oversight, and accountability, AI systems can undermine privacy, equality, freedom of expression, and access to justice. AI is often deployed in opaque ways: trained on biased data, programmed with flawed assumptions, and used without clear mechanisms for redress. These structural flaws, left unchecked, can reinforce systemic discrimination and institutional bias.
A common example is algorithmic decision-making in employment, where candidates are screened based on historical data that may reflect discriminatory practices, inadvertently excluding marginalized groups.[4] Facial recognition software has also shown higher error rates for women and people of colour, leading to wrongful arrests and invasive surveillance.[5]
Key Human Rights Affected by AI
1. Right to Privacy
The right to privacy, enshrined in Article 17 of the International Covenant on Civil and Political Rights (ICCPR), is under significant threat from AI systems that enable mass surveillance.[6] Facial recognition, biometric identification, and data scraping tools collect and process sensitive personal data without consent. In China, AI is central to the state’s surveillance apparatus, where facial recognition and behaviour prediction tools are used to monitor ethnic minorities.[7]
In liberal democracies, too, government agencies and corporations employ AI to collect metadata, track online behaviour, and predict individual actions, all in ways that often bypass traditional legal safeguards. Without robust data protection laws and transparent consent mechanisms, individuals lose control over their personal information.
2. Freedom of Expression
AI-powered content moderation tools are widely used on platforms like Facebook, X (Twitter), and YouTube to remove hate speech, misinformation, or violent content. While this serves a legitimate aim, it also raises concerns about over-censorship. Algorithms lack contextual understanding and often silence legitimate dissent or satire.[8] Automated systems disproportionately remove content from vulnerable or minority voices, chilling free expression and debate.
Further, opaque recommendation algorithms influence public discourse by shaping the visibility and reach of information. This lack of transparency undermines democratic engagement and can exacerbate polarization.
3. Right to Equality and Non-Discrimination
One of the most troubling dimensions of AI is its capacity to reinforce and magnify social inequalities. Algorithmic bias stems from historical data that reflects prejudices in society.[9] AI systems used in credit scoring, hiring, and criminal sentencing have produced racially discriminatory outcomes in multiple jurisdictions.
For instance, in the United States, the COMPAS risk assessment tool used to predict recidivism was found to be biased against African-American defendants.[10] Similarly, Amazon discontinued its AI recruitment tool when it was discovered to disadvantage female applicants due to training on male-dominated data.[11]
Without mechanisms to audit and correct for such bias, AI undermines the principle of equality before the law and perpetuates systemic injustice.
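One widely used audit metric is the disparate impact ratio, drawn from the US EEOC’s “four-fifths rule”: if the favourable-outcome rate for a protected group falls below 80% of the rate for the reference group, adverse impact is suspected. The sketch below, using entirely hypothetical hiring data, illustrates what such a minimal audit computation looks like; it is an illustration of the metric, not a substitute for a full legal or statistical analysis.

```python
# Illustrative bias audit: the "four-fifths rule" disparate impact check.
# All data, group labels, and outcomes below are hypothetical.

def disparate_impact_ratio(outcomes, groups, favourable, protected, reference):
    """Ratio of favourable-outcome rates: protected group vs reference group."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in selected if o == favourable) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical hiring decisions for two applicant groups, A and B.
outcomes = ["hire", "reject", "hire", "reject", "reject", "hire", "hire", "hire"]
groups   = ["A",    "A",      "A",    "A",      "B",      "B",    "B",    "B"]

# Group A is hired at rate 0.50, group B at 0.75, giving a ratio of 0.67,
# below the 0.8 threshold that conventionally signals adverse impact.
ratio = disparate_impact_ratio(outcomes, groups, "hire", "A", "B")
print(f"Disparate impact ratio: {ratio:.2f}")
```

Real audits would add statistical significance testing and examine intersectional subgroups, but even this simple ratio shows why access to outcome data by group is a precondition for any meaningful accountability mechanism.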
4. Right to a Fair Trial and Access to Justice
The use of AI in judicial systems raises significant procedural fairness concerns. Risk-assessment algorithms influence decisions on bail, parole, and sentencing. Yet, these systems often operate as “black boxes,” with neither the defendant nor their counsel able to interrogate the reasoning behind the algorithm’s recommendation.
In State v Loomis, the defendant argued that the sentencing court’s use of the COMPAS tool violated his right to due process.[12] The Wisconsin Supreme Court rejected this claim, setting a troubling precedent for the unchecked use of opaque AI in justice systems.
Moreover, access to justice is hindered when individuals lack awareness of how AI decisions are made or have no effective recourse to challenge them. Without algorithmic explainability, the legal principle of “audi alteram partem” (hear the other side) is severely compromised.
Towards Global AI Governance Anchored in Human Rights
1. International Legal Norms
International law must evolve to address the unique challenges posed by AI. While instruments like the Universal Declaration of Human Rights (UDHR) and the ICCPR remain applicable, new norms are needed to address issues like algorithmic accountability, data sovereignty, and digital personhood.
The European Union’s Artificial Intelligence Act, proposed in 2021 and since adopted, is the most advanced legislative model so far, classifying AI systems into risk categories and imposing compliance requirements accordingly.[13] The OECD, UNESCO, and the Council of Europe have also issued soft law instruments, such as guidelines and ethical principles.[14] However, these are non-binding and often lack enforcement mechanisms.
The UN High Commissioner for Human Rights has called for a moratorium on the sale and use of AI systems that pose serious risks to human rights until adequate safeguards are in place.[15] This reflects growing consensus on the need for international cooperation, ideally culminating in a binding global treaty on AI and human rights.
2. National Legislation and Institutional Oversight
National governments must incorporate international principles into domestic law. Comprehensive data protection legislation, algorithmic impact assessments, and independent oversight bodies are essential. For instance, Canada’s proposed Artificial Intelligence and Data Act (AIDA) mandates transparency and risk mitigation.[16]
In India, the Supreme Court’s landmark ruling in Justice K.S. Puttaswamy v Union of India recognized the right to privacy as a fundamental right under Article 21 of the Constitution.[17] However, India’s data protection legislation faced repeated delays and criticism for its wide-ranging exemptions for state agencies.[18] India’s growing use of AI in digital welfare systems like Aadhaar and predictive policing must be balanced with enforceable legal protections.
Independent institutions such as data protection authorities and digital rights ombudspersons should be empowered to investigate abuses and enforce compliance.
3. Corporate Due Diligence and Ethical Design
Private tech companies play a central role in developing and deploying AI technologies. The UN Guiding Principles on Business and Human Rights require businesses to respect human rights by conducting due diligence and offering remedies.[19] Tech firms must go beyond profit motives and embed ethics into system design through bias audits, human-in-the-loop systems, and algorithmic transparency.
Initiatives like Microsoft’s “Responsible AI” and Google’s AI Principles are voluntary efforts in this direction, but independent monitoring is needed to ensure genuine accountability.
4. Public Participation and Digital Literacy
AI governance must be inclusive and democratic. Affected communities, especially marginalised groups, must have a voice in the development of AI regulations. Citizen engagement, civil society activism, and participatory policymaking can help bridge the gap between technical decisions and social realities.
Moreover, digital literacy campaigns are essential to empower individuals to understand how AI affects their rights and how they can seek redress. This includes awareness of data collection practices, algorithmic profiling, and privacy tools.
Conclusion
AI is not inherently detrimental to human rights; rather, it is the way these technologies are conceived, built, and deployed that determines their impact. Striking a balance between innovation and human dignity is one of the defining challenges of this century. We need robust legal frameworks, ethical oversight, and inclusive governance models to ensure AI serves as a tool for empowerment, not oppression.
The global community must rise to the occasion by shaping a future where technological advancement is guided by human rights principles. Only then can we truly claim to have created an AI ecosystem that reflects our shared values of justice, equality, and freedom.
References
[1] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) COM (2021) 206 final.
[2] World Health Organization, Ethics and Governance of Artificial Intelligence for Health (WHO 2021).
[3] OECD, ‘Digital Government Review of Brazil’ (2020) https://www.oecd.org/gov/digital-government-review-of-brazil.htm accessed 9 July 2025.
[4] Pauline Kim, ‘Data-Driven Discrimination at Work’ (2017) 58 Wm & Mary L Rev 857.
[5] Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’ (2018) 81 Proceedings of Machine Learning Research 1.
[6] International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) 999 UNTS 171 (ICCPR) art 17.
[7] Human Rights Watch, ‘China’s Algorithms of Repression’ (2019) https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression accessed 9 July 2025.
[8] David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (A/73/348, 2018).
[9] Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’ (2021) 41 Computer Law & Security Review 105567.
[10] Julia Angwin and others, ‘Machine Bias’ ProPublica (23 May 2016) https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing accessed 9 July 2025.
[11] Jeffrey Dastin, ‘Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women’ Reuters (10 October 2018).
[12] State v Loomis 881 N.W.2d 749 (Wis. 2016).
[13] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) COM (2021) 206 final.
[14] UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).
[15] OHCHR, ‘The Right to Privacy in the Digital Age’ (2021) A/HRC/48/31.
[16] Government of Canada, Artificial Intelligence and Data Act (Bill C-27, 2022).
[17] Justice K.S. Puttaswamy (Retd) v Union of India (2017) 10 SCC 1.
[18] Pranesh Prakash, ‘India’s Draft Data Protection Law: A Legal Analysis’ (2021) Centre for Internet and Society.
[19] UNHRC, Guiding Principles on Business and Human Rights (UN Doc HR/PUB/11/04, 2011).