THE LEGAL CHALLENGES OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

Published on 8th April 2025

Authored By: Aishwarya Uniyal
UPES

INTRODUCTION

Artificial Intelligence (AI) describes systems that exhibit behavior that could be interpreted as human intelligence.

AI systems can recommend products and detect fraud. Many of those who interact with these systems are people in business: entrepreneurs, managers, or students. Just as managers work with software developers today, these same managers will work with AI systems and data in the future.

There is no single standard for human intelligence, and there are many tasks that AI can already do better than humans. Such systems have come to seem more intelligent because, over the decades, computers have beaten humans more and more often at games such as chess.

Computer scientists did not program the early checkers-playing machines with their moves and countermoves. Instead, the systems were designed to learn and improve through their own experience. This idea is called machine learning. By learning from data, machines can continue to improve as more data arrives and can adapt to new information. However, machine learning systems are still just identifying patterns.
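The idea can be illustrated with a minimal, purely hypothetical sketch: no classification rule is written by a programmer; a single cut-off is inferred from labelled examples. The task, feature, and data below are invented for illustration only.

```python
# Hypothetical example: flag emails as spam from a suspicious-word count.
# No rule is hand-coded; the threshold is learned from labelled examples.

def learn_threshold(examples):
    """Pick the cut-off score that best separates the labelled examples."""
    best_threshold, best_correct = 0, -1
    for threshold in range(0, 11):
        correct = sum((score >= threshold) == is_spam
                      for score, is_spam in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

# Labelled "experience": (suspicious-word count, was it spam?)
training_data = [(0, False), (1, False), (2, False),
                 (7, True), (8, True), (9, True)]
threshold = learn_threshold(training_data)

def predict(score):
    """Classify a new email using the learned cut-off, not a written rule."""
    return score >= threshold
```

Feed the learner different data and it infers a different rule, which is exactly the adaptability (and the pattern-matching limitation) described above.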

AI IN LEGAL PRACTICE

  1. Document Review- AI-powered software can be more effective at identifying the relevant facts and issues in a particular legal case, diminishing the probability of human error.
  2. Legal Research- Platforms such as LexisNexis incorporate AI to provide reasonably accurate predictions about legal opinions, assisting lawyers in forming stronger arguments.
  3. Contract Drafting- AI can draft contracts faster than humans, breaking down complex legal tasks and flagging minute problems that might otherwise be overlooked.

AI GOVERNANCE IN INDIA

India has no AI-specific law yet; however, the Digital Personal Data Protection Act, 2023 regulates how AI systems may use personal data.

The government believes in “Responsible AI” and prioritizes fairness, transparency, and accountability. It is creating an AI policy framework that balances innovation with regulation, making sure that AI systems meet ethical standards.

Information Technology Act, 2000 – Governs AI-related cyberattacks, data protection, and liability issues. Section 66 applies to AI-based cybercrimes, while Section 43A deals with data protection. However, AI accountability remains a grey area.

Consumer Protection Act, 2019 – Regulates AI-driven practices in e-commerce. Section 2(47) covers misleading advertisements, including AI-generated ones, and AI systems that recommend products or set prices must conform to fair trade practices.

AI Strategy by NITI Aayog (2021) – Promotes “Responsible AI” built on fairness, transparency, and accountability. The strategy aims to balance AI innovation with purposeful governance, paving the way for AI-specific legislation in the future.

ETHICAL AND LEGAL CHALLENGES

  1. The alignment problem- Even the earliest forms of AI relied on computers, data, and programming, so many challenges are technical in nature. But some AI challenges lead to deep moral questions. A core element of working with AI systems is considering one’s responsibility towards users and how the systems align with the organization’s values.
  2. Copyright challenges- When you ask a chatbot like ChatGPT a question, you get a response that sounds human. Ask about a sunset, and it will describe a mix of yellow, orange, pink, and purple, even using terms such as breathtaking, serenity, and majestic. But what is really happening? Companies like OpenAI use foundation models such as large language models, and these generative AI models imitate what humans say, making it seem as though you are talking to a person. It feels like a conversation, but the system is just generating sentences from a huge cloud of word probabilities and data. The key issue is this: if 1,00,000 humans describe a sunset, do those humans own what they have said, or can a large model vacuum up their words and use them to start new conversations? Human authors are protected by copyright, which safeguards their words; without that protection, there would be very little incentive for authors to keep publishing on the same platforms.
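The point about "generating sentences from probabilities" can be sketched with a toy example. Nothing here resembles a real large language model: the word-probability table is invented for illustration, whereas real models learn such probabilities from enormous corpora of human text, which is precisely where the copyright question arises.

```python
import random

# Toy next-word generator: it does not "know" sunsets, it only samples the
# next word from (invented) probabilities learned from human descriptions.
next_word_probs = {
    "the": {"sunset": 0.5, "sky": 0.5},
    "sunset": {"was": 1.0},
    "sky": {"was": 1.0},
    "was": {"breathtaking": 0.6, "majestic": 0.4},
}

def generate(start, steps, rng):
    """Grow a sentence one word at a time by weighted random sampling."""
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:  # no continuation learned for this word
            break
        choices, probs = zip(*options.items())
        words.append(rng.choices(choices, weights=probs)[0])
    return " ".join(words)

print(generate("the", 3, random.Random(0)))
```

Every phrase the generator emits is recombined from the human-written material it was trained on, which is why authors ask whether that training itself requires permission.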

There are exceptions, however, such as fair use: one can use copyright-protected material in a way that is fair to the author, balancing the value of the work against the public’s need for new information. The biggest area of uncertainty is whether training language models falls under the fair use exception.

In December 2023, The New York Times sued OpenAI and Microsoft, claiming they infringed its copyrights by using Times articles without permission to train AI models like ChatGPT. The Times wants both damages in an unspecified amount and an order that would stop OpenAI and Microsoft from further unauthorized use of Times content. OpenAI maintains that its use of the content is protected by the fair use doctrine. The case is still in progress, with enormous implications for the intersection of artificial intelligence and copyright law.[1]

Companies like OpenAI argue that training their models on the world’s data falls under fair use. They liken the model to a super-intelligent human who goes into every library, reads everything, and uses that knowledge. Google trains its models on similar data, but its Gemini chatbot often provides a link to the original work. Publishers such as The New York Times claim that training these models is not fair use: when a model summarizes an article, it harms the original’s value, since readers become less likely to purchase articles or visit the publisher’s website. [2]

Though the issue may appear to be a simple legal disagreement, it poses a significant threat to large language models. If most of this material is protected, these chatbots will lose access to most of their training data.

It is currently being decided by regulators and court systems whether this training violates copyright protection.

The Madras High Court upheld the rejection of an AI-human integration patent claim. The court found no cogent reason to interfere with the Controller’s decision and consequently dismissed the appeal.[3]

  3. Biased and unfair outputs– AI relies mostly on historical data and can ignore current human and societal norms. Considering only historical data also ignores new innovation and dynamism in society.

For instance, Amazon developed an AI-driven hiring tool to streamline the process of hiring but discovered that the tool was discriminating against female candidates for technical roles. The AI, trained on past male-dominated hiring records, penalized resumes for containing terms like “women’s” (e.g., “women’s chess club”). Qualified female candidates were thus unfairly penalized.[4]
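How skewed historical records translate into a biased model can be sketched in a few lines. This is a hedged illustration, not Amazon's actual system: the résumé features, the past hiring labels, and the crude co-occurrence "model" below are all invented.

```python
# Invented past hiring records: (features, was the candidate hired?).
# The last two reflect biased historical decisions, not candidate quality.
historical_hires = [
    ({"experience": 5, "keyword_womens": 0}, True),
    ({"experience": 6, "keyword_womens": 0}, True),
    ({"experience": 6, "keyword_womens": 1}, False),  # biased past decision
    ({"experience": 5, "keyword_womens": 1}, False),  # biased past decision
]

def learn_weights(data):
    """Crude 'model': weight each feature by how often it co-occurs with hiring."""
    weights = {}
    for features, hired in data:
        for name, value in features.items():
            weights[name] = weights.get(name, 0) + value * (1 if hired else -1)
    return weights

weights = learn_weights(historical_hires)

def score(features):
    """Rank a new résumé with the learned weights."""
    return sum(weights.get(name, 0) * value for name, value in features.items())
```

With this data, the "women's" keyword ends up with a negative weight purely because of the biased history, so two equally experienced candidates are scored differently. No one programmed the discrimination; the model absorbed it from the records.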

  4. Restrictive privacy policies- When asked about sensitive matters such as rape, sexual harassment, or molestation, AI tools often refuse to assist, citing violations of their privacy or content policies.
  5. Decline in job opportunities- In a country already grappling with poverty, unemployment, and unequal access to education and healthcare, new-generation lawyers, paralegals, and junior lawyers are at particular risk, since the work that remains demands highly skilled knowledge and strategic thinking.
  6. Mass data harvesting- Clearview AI, a facial recognition company based in the United States, unlawfully collected more than 3 billion images from social media sites such as Facebook, Instagram, and Twitter without users’ consent. It created an advanced AI image recognition database that it sold to police departments and private organizations, raising significant issues of privacy and surveillance. The Clearview AI case demonstrates the consequences of information being harvested with the aid of artificial intelligence, as well as the importance of having robust AI privacy laws in place.[5]

CONCLUSION

AI is no replacement for human attorneys; rather, it is an effective and efficient tool for assisting them. In certain areas, however, legal technology surpasses humans in efficiency and speed of remedy. Lawyers should therefore embrace continuous learning and skill development to expand legal services. The future can be expected to be a harmonious collaboration between human intelligence and artificial intelligence, ensuring speedy justice.

AI is revolutionizing the legal profession with amazing efficiency and analysis. Nevertheless, its rapid utilization raises significant legal and ethical issues that should receive immediate focus. The critical challenges are copyright conflicts, liability risks, biased judgments, data privacy risks, and loss of employment.

Even as AI enhances legal research, contract drafting, and case prediction, it cannot replace human discretion, moral judgment, or lawyers’ responsibility for ensuring that justice prevails. Legal norms should evolve rapidly to govern AI’s influence while fostering innovation. India and the U.S. are already formulating AI policies, but the international community must also have a governing plan for AI to maintain justice, transparency, and responsibility.

In the future, tougher rules for AI, ethical AI practice, and laws protecting privacy and human rights will be essential in balancing the rise of technology with legal obligations. The future of AI law cannot be aimed at replacing human thinking but making way for smooth collaboration between AI and legal specialists in ensuring the global justice systems remain equitable, just, and functional.

 

REFERENCES

[1] The New York Times v. OpenAI (2024)

[2] U.S. Copyright Office Decision on AI Works

[3] livelaw.in

[4] Amazon’s Hiring AI Bias Controversy (2018)

[5] The New York Times (2020)
