Artificial Intelligence and Human Rights: Striking a Balance in Global Governance

Published On: August 19, 2025

Authored By: Aishwarya Raosaheb Chakor
Savitribai Phule Pune University, Pune

Introduction

Artificial Intelligence (AI) is no longer a futuristic concept; it is deeply embedded in modern life. From virtual assistants and predictive tools to autonomous drones and security systems, AI is used everywhere, and its adoption is growing rapidly. But with this rapid growth comes an important question: how do we protect human rights in the age of AI?

The intersection of AI and human rights creates one of the toughest problems facing everyone involved: lawmakers, tech experts, ethicists, and judges. We need strong, worldwide rules that keep innovation alive without trampling on people's basic rights.

AI affects so many parts of our lives, from privacy and freedom to fairness, that it cannot be managed by any one country alone. It needs global coordination that adapts as technology evolves. Experts from different fields need to team up, combining legal knowledge, technical skill, ethical insight, and human rights awareness, to build AI systems that respect people's dignity.

Human Rights Challenges of AI

Right to Privacy and Data Protection

AI systems process large amounts of our personal information, such as photos of our faces, where we go, and what we do online.

Tools like face-recognition cameras, GPS trackers, and social media monitoring mean people can be watched all the time.

When countries don’t have strong laws to protect data, governments or companies can:

  1. Track your movements,
  2. Build personal profiles on you,
  3. Predict what you might do next.

This takes away your privacy and makes you feel like you are under constant surveillance, which goes against Article 12 of the Universal Declaration of Human Rights (everyone has the right to privacy).

Algorithmic Discrimination and Bias

AI systems learn from data that comes from the real world. But if that data is unfair or biased, the AI will also behave unfairly. This means AI can:

  1. Give lower scores to certain people when they apply for loans,
  2. Reject job applications unfairly,
  3. Treat people unequally in policing or healthcare.

These problems often hurt women, people from minority communities, and poor people the most.

For example, facial recognition technology often makes more mistakes when identifying people with darker skin, which can lead to wrongful arrests or people being denied services.

This goes against the basic human right that says everyone should be treated equally and without discrimination (Article 7 of the Universal Declaration of Human Rights).
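A minimal sketch of how this happens, using entirely made-up data: a toy model that simply learns historical approval rates per group will reproduce whatever disparity the historical records contain. The groups, numbers, and 0.5 threshold below are illustrative assumptions, not a real system.

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (group label, approved?).
# Group "B" was approved far less often in the past -- made-up data
# standing in for real-world bias.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """Learn the historical approval rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: ok / total for g, (ok, total) in counts.items()}

def predict(rates, group, threshold=0.5):
    """Approve only when the learned rate for the group clears the threshold."""
    return rates[group] >= threshold

rates = train(history)
print(rates)                # {'A': 0.8, 'B': 0.3}
print(predict(rates, "A"))  # True  -- group A keeps getting approved
print(predict(rates, "B"))  # False -- group B keeps getting rejected
```

The model never sees an instruction to discriminate; the bias enters solely through the training data, which is why "the data was neutral" is not a defense.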

Freedom of Expression and Information

Social media uses AI to decide which posts to show and which to block.

  1. This helps remove harmful content, like hate speech or fake news.
  2. But sometimes, it also hides important opinions or censors people unfairly.
  3. If the rules for removing content are not clear, it can stop people from freely sharing their thoughts or accessing different views.

This goes against Article 19 of the Universal Declaration of Human Rights, which says everyone has the right to express their opinions and get information.

Due Process and Accountability

AI is now used to make important decisions, like who gets a loan, who passes a visa interview, or even the outcome of a legal case.

  1. But these decisions are often made by complex systems that people don’t understand.
  2. If someone is treated unfairly by an AI decision, they may not know how to fight it or who to blame.

This takes away a person's right to a fair process, which includes being able to question decisions and get justice.
Also, it is unclear who is responsible: the programmer, the company, or the machine. That makes it hard to get help when something goes wrong.

Right to Work and Economic Security

AI and robots are doing more jobs that humans used to do.

  1. This could lead to millions of people losing jobs, especially in developing countries.
  2. While some new tech jobs may be created, many people may be left behind without support or new skills.

This threatens the right to work (Article 23 of the UDHR) and increases economic inequality, especially for poor or unskilled workers.
There are also concerns about gig work (temporary or contract jobs) becoming more common, without proper protections.

Current Global Governance Landscape

 There is growing recognition that AI must be governed not just nationally but globally. However, governance efforts are currently fragmented.

Soft Law and Ethical Frameworks

  1. OECD AI Principles (2019): The first intergovernmental agreement promoting trustworthy AI based on human rights and democratic values.
  2. UNESCO Recommendation on the Ethics of AI (2021): Covers privacy, accountability, sustainability, and fairness.
  3. G7 and G20 AI Guidelines: Emphasize innovation, safety, and rights protection, but lack enforcement mechanisms.

While these frameworks are important, they are non-binding. Ethical guidelines alone cannot deter abuse or ensure remedies.

Regional and National Legal Instruments

  1. European Union: The EU AI Act (adopted in 2024) is the world's first binding law categorizing AI by risk: banning unacceptable applications (e.g., social scoring), requiring transparency for high-risk uses, and enforcing penalties for violations.
  2. Council of Europe AI Treaty (2024): The first legally binding multilateral agreement aiming to ensure AI complies with human rights, democracy, and the rule of law.
  3. China: Pursues AI governance through a centralized, state-controlled framework focused on safety, but critics cite surveillance and a lack of rights safeguards.
  4. USA: Currently relies on sectoral regulations, with moves toward a federal AI Bill of Rights.

Gaps in Global Governance

There is no universal treaty governing AI akin to the Paris Agreement for climate or the Geneva Conventions for war. Disparities in regulatory priorities between innovation-driven states and rights-focused regimes create loopholes. Developing nations, meanwhile, often lack the legal and technical capacity to enforce rights-based AI oversight, leading to potential exploitation by global tech firms.

Striking a Balance: What Should Global AI Rules Include?

To protect people's rights while using AI, the world needs strong and fair global rules. Here is what those rules should include:

Focus on Human Rights

AI rules must follow international human rights laws.

Even in the digital age, people’s basic rights are still important.

These include:

  1. The right to privacy
  2. Equal treatment under the law
  3. Freedom to think, speak, and meet freely
  4. Fair treatment at work
  5. The right to complain and get justice when wronged

Transparency and Understanding

People should know when AI is used and how it affects them.

  1. AI should be easy to understand and check (auditable).
  2. If AI is used in important areas like policing, education, or social welfare, governments should first study the possible impacts, just as they do with environmental impact assessments.
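One concrete way an independent reviewer can check an AI system, sketched below with hypothetical decision logs: compare approval rates across groups and flag the system when the gap exceeds a chosen tolerance. The log format, group names, and 0.2 tolerance are all assumptions made for illustration, not a standard.

```python
from collections import defaultdict

def approval_rates(log):
    """Approval rate per group from a log of (group, approved?) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in log:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: ok / total for g, (ok, total) in counts.items()}

def audit(log, tolerance=0.2):
    """Flag the system when the gap between the best- and worst-treated
    group's approval rate exceeds the tolerance."""
    rates = approval_rates(log)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": round(gap, 2), "fair": gap <= tolerance}

# Made-up decision log: urban applicants approved far more often.
log = [("urban", True)] * 45 + [("urban", False)] * 5 \
    + [("rural", True)] * 20 + [("rural", False)] * 30

print(audit(log))  # {'rates': {'urban': 0.9, 'rural': 0.4}, 'gap': 0.5, 'fair': False}
```

An audit like this only works if the system logs its decisions in the first place, which is why auditability has to be designed in rather than bolted on.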

Responsibility and Justice

If AI causes harm, someone must be held responsible.

  1. Clear rules should say who is accountable: the company that made the AI, the person who used it, or the one who collected the data.
  2. People should have the right to challenge or appeal AI decisions.
  3. There should be independent bodies to check if AI is being used fairly.

Fair and Inclusive AI

AI should benefit everyone, not just rich countries or tech experts.

To make this happen:

  1. Poorer countries (Global South) should get help to use AI in healthcare, farming, and education.
  2. AI should be trained on diverse data to avoid treating certain groups unfairly.
  3. People from marginalized communities should be included when designing AI tools.
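Training on diverse data can start with something as simple as rebalancing a skewed dataset so every group is equally represented. The sketch below oversamples under-represented groups up to the size of the largest; the records and group names are made up, and real systems would use more careful techniques than plain oversampling.

```python
import random
from collections import Counter

def rebalance(records, key):
    """Oversample under-represented groups up to the size of the largest group."""
    groups = {}
    for r in records:
        groups.setdefault(key(r), []).append(r)
    target = max(len(members) for members in groups.values())
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Skewed made-up training set: 90 records from group A, only 10 from group B.
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
balanced = rebalance(data, key=lambda r: r["group"])
print(Counter(r["group"] for r in balanced))  # Counter({'A': 90, 'B': 90})
```

Balancing representation does not by itself remove bias in the labels, so it complements, rather than replaces, the auditing and inclusive design steps above.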

Conclusion

The way we use AI today will decide how human rights are protected in the future. If we do not set clear rules, AI could be misused to take away freedom, increase unfairness, or control people. But if we create strong and fair rules, AI can be a force for good: helping everyone, improving services, and solving big problems. Balancing new technology with human rights is not just a question of law or technology; it is about doing the right thing. It will take teamwork, courage, and a shared commitment to respect human dignity in this AI-powered world.
