Published on: 29th October 2025
Authored by: Sampada Neupane
National Law College, Tribhuvan University
ABSTRACT
The rapid advancement of Artificial Intelligence (AI), particularly generative AI, has transformed the legal sector by enhancing efficiency, accuracy, and accessibility. However, this technological evolution presents pressing ethical and human rights concerns. This article critically examines the implications of AI integration in the legal domain, focusing on issues of bias, accountability, transparency, and privacy. It explores how biased training data can perpetuate discrimination, violating principles of equality and non-discrimination under international human rights law.
Introduction
Over the past year, AI has grown significantly; 2023 will arguably go down in history as the year of artificial intelligence.1 With this rapid development, AI's presence in the legal sector has also grown markedly. Generative AI has greatly accelerated the efficiency and effectiveness of the legal industry, largely owing to large language models (LLMs) that process vast amounts of text and generate new content based on their derived interpretation of human language.2 AI systems can often produce surprisingly good results on complex tasks that, when performed by humans, require cognition. Notably, however, these systems do so through computational mechanisms that do not resemble human thinking.3 Even with its rapid development, AI does not quite match what we call true intelligence.4 AI cannot think for itself, which is why scientists coined the separate term "artificial general intelligence" (AGI) for a hypothetical system that could.

1 Blake Schmidt, ChatGPT "Arms Race" Adds $4.6 Billion to Nvidia Founder's Fortune, BLOOMBERG.COM (2023), https://www.bloomberg.com/news/articles/2023-01-27/chatgpt-arms-race-adds-4-6-billion-to-nvidia-founder-s-fortune (last visited June 8, 2025).
2 NetDocuments, The Rise of AI in Legal: Revolutionizing the Legal Landscape, NETDOCUMENTS.COM (2024), https://www.netdocuments.com/blog/the-rise-of-ai-in-legal-revolutionizing-the-legal-landscape (last visited June 8, 2025).
3 Harry Surden, Artificial Intelligence and Law: An Overview, 35 GEORGIA STATE UNIVERSITY LAW REVIEW 19, 5 (2019).
4 Tomas Tulka, Conscious AI: Can Artificial Intelligence Think About Itself?, MEDIUM (2024), https://ttulka.medium.com/conscious-ai-6e09ede24717 (last visited June 9, 2025).

The rapid advancement of artificial intelligence (AI) technologies has ushered in a transformative era across various sectors,5 including healthcare, finance, transportation, and, notably, the legal industry. As AI becomes increasingly embedded in legal practice, it holds the potential to revolutionize traditional legal processes, enhancing efficiency, accuracy, and accessibility. However, the integration of AI into the legal sector also raises critical ethical and regulatory challenges that must be addressed to ensure its responsible use.

The legal profession has traditionally relied on human expertise for tasks such as legal research, contract analysis, case management, and compliance monitoring. With the advent of AI, these practices are undergoing significant transformation. Early AI applications in law focused on automating repetitive tasks, reducing the time and effort required from legal professionals.6 Today, advances in machine learning, natural language processing, and data analytics have expanded AI's capabilities, enabling it to perform complex tasks such as predictive analytics for case outcomes, automated contract review, and even preliminary legal advice through AI-driven chatbots.7

The growing importance of AI in the legal sector is driven by several factors. There is increasing demand for cost-efficient legal services, with clients seeking affordable solutions that AI can provide by automating labor-intensive processes. Additionally, the volume of data involved in legal work has grown exponentially, necessitating AI tools capable of processing and analyzing vast amounts of information quickly and accurately. AI also enhances decision-making by providing insights into likely case outcomes and identifying potential risks in contracts, thus improving the precision and effectiveness of legal strategies.

5 Mukeshkumar MS, Artificial Intelligence (AI) Has Undergone a Remarkable Evolution in Recent Years..., LINKEDIN.COM (2023), https://www.linkedin.com/pulse/rapid-development-artificial-intelligence-journey-mukeshkumar-ms-h4fzc (last visited June 9, 2025).
6 Kosta Mitrofanskiy, Artificial Intelligence (AI) in the Law Industry, INTELLISOFT (2024), https://intellisoft.io/artificial-intelligence-ai-in-the-law-industry-key-trends-examples-usages/ (last visited June 9, 2025).
7 Erin Brereton, How AI Is Transforming the Legal Profession, AM. BAR ASS'N (May 2, 2022), https://www.americanbar.org/groups/law_practice/publications/law_practice_magazine/2022/may-june/how-ai-is-transforming-the-legal-profession/ (last visited June 10, 2025).
However, the integration of AI into legal practices is not without its challenges. Ethical concerns such as bias in AI algorithms, transparency in AI decision-making, and accountability of AI systems are paramount.8 These issues necessitate the development of robust ethical guidelines and regulatory frameworks to ensure that AI is deployed responsibly. The core ethical principles of fairness, transparency, privacy, and accountability form the foundation of these frameworks.
AI regulation in law varies globally. The EU’s AI Act adopts a risk-based framework with strict controls on high-risk systems, while the U.S. follows a decentralized, sector-specific approach. The UK promotes innovation-friendly regulation, and China enforces top-down ethical oversight.
However, challenges persist, including rapid technological change, system opacity, data privacy concerns, and regulatory fragmentation. Resistance within the legal profession also hinders implementation.
Despite these hurdles, AI offers opportunities to enhance fairness, transparency, privacy, and global regulatory harmonization. With targeted legal education and innovation, AI can help build a more just and efficient legal system.
Bias in AI systems
Generative AI systems learn patterns and behaviors from the data on which they are trained. If the training data contains biases, whether explicit or implicit, the AI system may replicate or amplify those biases in its outputs. For example, an AI model trained on biased hiring data might generate discriminatory job descriptions or recommendations. To address this, developers must prioritize diverse, representative, and balanced datasets during training. Ongoing auditing and testing of AI systems for bias are also essential to identify and mitigate discriminatory outcomes. Techniques such as fairness-aware algorithms and bias-correction methods can further help ensure that AI systems operate equitably.

8 INTEL – Principles of Artificial Intelligence Ethics for the Intelligence Community, INTELLIGENCE.GOV (2024), https://www.intelligence.gov/principles-of-artificial-intelligence-ethics-for-the-intelligence-community (last visited June 9, 2025).
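To make the auditing idea concrete, the following is a minimal sketch of one common fairness check, the demographic-parity gap between two groups' selection rates. All names here (the `records` dataset, the `group` and `hired` fields) are hypothetical illustrations, not a reference to any particular system.

```python
# Minimal bias-audit sketch: demographic parity difference.
# All dataset and field names are hypothetical.

def selection_rate(records, group):
    """Fraction of candidates in `group` receiving a positive outcome."""
    members = [r for r in records if r["group"] == group]
    if not members:
        return 0.0
    return sum(r["hired"] for r in members) / len(members)

def demographic_parity_gap(records, group_a, group_b):
    """Absolute difference in selection rates between two groups.
    Values near 0 suggest parity; large gaps flag possible bias."""
    return abs(selection_rate(records, group_a) - selection_rate(records, group_b))

# Example: audit a model's hiring recommendations.
recommendations = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "A", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]
gap = demographic_parity_gap(recommendations, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs. 0.25 -> gap of 0.50
```

An audit of this kind would typically be run on each model release; a gap above an agreed threshold would trigger the dataset rebalancing or bias-correction steps described above.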
Ethical and Moral Considerations in Generative AI
Generative AI technologies, while offering transformative potential across various domains, also present profound ethical and moral challenges. These concerns are particularly salient in the context of human rights, as the misuse or unethical application of AI systems can lead to significant violations of fundamental rights.
Bias and Discrimination
One of the most critical ethical issues in generative AI is the potential for bias and discrimination. AI systems are typically trained on vast datasets that often mirror existing societal biases.9 If these biases are not identified and mitigated, generative AI systems risk perpetuating or even amplifying discriminatory practices against certain groups. This raises significant ethical concerns, as it challenges the principles of fairness, equity, and justice that underpin human rights frameworks.
Gender Bias
Generative AI systems may produce content that reinforces traditional gender stereotypes. For instance, AI-generated text or images might disproportionately depict women in domestic or caregiving roles, while men are portrayed in positions of authority or technical expertise.10 Such representations not only reflect but also perpetuate outdated gender norms, contributing to the marginalization of women in various spheres of life.
Racial Bias
AI systems can also exhibit racial biases, often reflecting historical and systemic inequalities present in the training data.11 For example, generative AI might associate certain ethnic or racial groups with negative stereotypes, such as criminality or poverty, while underrepresenting or misrepresenting their contributions to society. This can lead to harmful generalizations and reinforce prejudiced attitudes.

9 Margot E. Kaminski, Regulating the Risks of AI, 103 B.U. L. Rev. 547 (2023).
10 U.S. Gov't Accountability Off., GAO-21-519, Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities (2021).
11 Id.
Implications for Human Rights:
Non-Discrimination:
The perpetuation of bias in generative AI systems directly undermines the principle of non-discrimination, a fundamental tenet of international human rights law. Article 2 of the Universal Declaration of Human Rights (UDHR)12 and Article 26 of the International Covenant on Civil and Political Rights (ICCPR)13 explicitly prohibit discrimination on the basis of race, gender, ethnicity, and other protected characteristics. When AI systems replicate or amplify biases, they contribute to systemic discrimination, violating these principles.
Equal Treatment:
Generative AI systems that produce biased content can lead to unequal treatment of individuals from marginalized groups. For example, biased AI-generated hiring recommendations or loan approval systems may disproportionately disadvantage certain demographics, exacerbating existing social and economic inequalities. This not only violates the right to equal treatment under the law but also perpetuates cycles of marginalization and exclusion.
In conclusion, addressing bias and discrimination in generative AI is not merely a technical challenge but a moral imperative. Ensuring that AI systems are designed and deployed in ways that uphold human rights principles requires a concerted effort to identify, mitigate, and eliminate biases in training data and algorithms. Failure to do so risks entrenching systemic inequalities and undermining the ethical foundations of AI development.
Accountability and Transparency
A core ethical concern in generative AI lies in the challenges of ensuring accountability and transparency. As AI systems generate content and influence decisions, attributing responsibility for potential harm, such as the spread of misinformation, privacy violations, or reputational damage, becomes increasingly difficult. The opacity of algorithmic processes further complicates efforts to establish clear lines of responsibility and erodes public trust.

12 G.A. Res. 217A (III), U.N. Doc. A/810 (Dec. 10, 1948).
13 International Covenant on Civil and Political Rights, adopted Dec. 16, 1966, S. Exec. Doc. E, 95-2 (1978), 999 U.N.T.S. 171.
Deepfakes, a notable example, involve AI-generated synthetic media that can convincingly mimic real individuals. These are often deployed to manipulate public perception or harm reputations, with limited mechanisms to trace their origin or hold creators accountable.14 Likewise, AI-generated misinformation can be disseminated at scale through automated systems, posing risks to democratic processes, public health, and social stability. The decentralized and anonymous nature of such outputs presents significant barriers to regulatory and legal accountability.15
Implications for Human Rights:
Right to Privacy:
The misuse of generative AI, particularly in the creation of deepfakes or other invasive content, can severely violate individuals’ right to privacy and dignity. Article 12 of the Universal Declaration of Human Rights (UDHR) and Article 17 of the International Covenant on Civil and Political Rights (ICCPR) explicitly protect individuals from arbitrary interference with their privacy. When AI systems are used to create non-consensual or harmful content, they infringe upon these rights, leaving victims with limited recourse due to the anonymity and complexity of AI technologies.
Freedom of Expression:
The spread of AI-generated misinformation can distort public discourse and undermine the right to freedom of expression, as enshrined in Article 19 of the UDHR and the ICCPR. While freedom of expression is a fundamental human right, it is contingent on the availability of accurate and reliable information. When generative AI is used to flood public spaces with false or misleading content, it erodes trust in information sources and stifles meaningful dialogue, ultimately harming democratic processes and societal cohesion.
Addressing Accountability and Transparency
14 Anil Kapoor v. John Doe, W.P. (C) No. 7212/2023 (Del. HC 2023).
15 Evelyn Douek, Deepfakes and the Legal Framework for Online Manipulation, 61 Harv. Int'l L.J. 102 (2020).
To mitigate these ethical challenges, it is essential to establish clear frameworks for accountability and transparency in the development and deployment of generative AI systems.16 This includes:
Traceability: Implementing mechanisms to trace the origin of AI-generated content, such as watermarking or digital signatures, to ensure accountability for misuse.
Regulation: Developing legal and regulatory frameworks that hold developers, deployers, and users of AI systems accountable for harmful outcomes.17
Ethical Design: Encouraging the adoption of ethical design principles, such as explainability and fairness, to ensure that AI systems operate transparently and align with human rights standards.
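The traceability idea above can be sketched in a toy form: a hypothetical AI provider attaches a cryptographic tag to each output so that its origin (and any later tampering) can be checked. HMAC-SHA256 is used here purely as a stand-in for the watermarking or public-key signature schemes a real provider might deploy; the key and content are illustrative.

```python
import hmac
import hashlib

# Hypothetical provider-side signing key (in practice, securely managed,
# e.g. in a hardware security module, and never shipped with the content).
PROVIDER_KEY = b"example-secret-key"

def sign_output(text: str) -> str:
    """Attach a provenance tag: an HMAC over the generated text."""
    return hmac.new(PROVIDER_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_output(text: str, tag: str) -> bool:
    """Check whether `text` was tagged by the holder of PROVIDER_KEY
    and has not been altered since."""
    expected = sign_output(text)
    # compare_digest avoids timing side channels when comparing tags.
    return hmac.compare_digest(expected, tag)

content = "AI-generated summary of the contract."
tag = sign_output(content)
print(verify_output(content, tag))                 # provenance intact
print(verify_output(content + " (edited)", tag))   # content was altered
```

A symmetric HMAC only lets the key holder verify; a production scheme would more likely use asymmetric signatures or embedded watermarks so that third parties, courts, and platforms can check provenance without access to the provider's secret.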
In conclusion, the ethical challenges of accountability and transparency in generative AI are deeply intertwined with human rights considerations. Addressing these issues requires a multidisciplinary approach that combines technical solutions, regulatory measures, and ethical guidelines to ensure that AI technologies are used responsibly and do not undermine fundamental rights.
16 U.N. High Comm'r for Hum. Rts., The Right to Privacy in the Digital Age, U.N. Doc. A/HRC/51/17 (2022).
17 United States v. Meta Platforms, Inc., No. 3:23-cv-01962 (N.D. Cal. 2023).