Regulation of Artificial Intelligence in India: Legal Challenges and the Road Ahead

Published on 11th August 2025

Authored By: Shayaree Sen
Jogesh Chandra Chaudhuri Law College

ABSTRACT

Artificial Intelligence is shaping the future—but India’s legal system isn’t ready for it yet. While AI tools are being used in policing, public welfare, and even courts, there’s no clear law to govern how they work or protect people from their risks. This article breaks down global approaches, India’s current legal gaps, and real examples of AI in action. It highlights the urgent need for accountability, fairness, and transparency in AI systems. Finally, it offers practical suggestions to help India build a strong legal framework that supports innovation without compromising on rights and justice.

KEYWORDS: Artificial intelligence, legal regulation, Indian technology law, algorithmic bias, facial recognition, right to privacy, SUPACE, ethical governance, data privacy

INTRODUCTION

Presently, India lacks a dedicated regulation for AI; instead, it has established a series of initiatives and guidelines aimed at the responsible development and deployment of AI technologies. But before delving into that discussion, let us first understand what AI is. Artificial intelligence (AI), once a distant concept of science fiction, has now become an integral part of modern technological advancement. It is a technology that enables computers and machines to simulate human learning, comprehension, problem-solving, decision-making, creativity and autonomy[1]. AI systems range from narrow AI, designed for specific tasks like language translation or image recognition, to more complex models that exhibit decision-making capabilities akin to human reasoning.

AI operates through a combination of algorithms, vast datasets, and computational power. Machine learning, a subset of AI, enables systems to learn and improve from experience without being explicitly programmed. Deep learning, a further subset, uses neural networks to simulate the human brain’s functioning, powering tools like voice assistants, recommendation engines, autonomous vehicles, and facial recognition systems.

In today’s world, AI is no longer a luxury or novelty—it is embedded in our everyday routines. From navigating with GPS to personalized content on streaming platforms, AI silently enhances convenience and efficiency. In healthcare, AI helps in early diagnosis and predictive treatment. In finance, it aids fraud detection and algorithmic trading. Even in the legal field, AI tools assist in contract analysis and legal research.

However, alongside these benefits arise serious concerns. AI systems can be opaque, difficult to regulate, and prone to biases inherited from data or human input. There are valid fears about job displacement, data misuse, algorithmic discrimination, and a lack of accountability when things go wrong. As AI becomes more autonomous, questions of responsibility, transparency, and ethics become urgent.[2]

Given this duality, it becomes essential to explore how laws and regulations can keep pace with such rapid innovation. In this article, we will examine the existing legal frameworks around AI, both globally and within India, highlight the key legal challenges posed by AI technologies, and suggest how India can move toward a robust and forward-thinking regulatory approach.

GLOBAL LEGAL FRAMEWORK

Since AI is developing so quickly, nations are regulating it in a variety of ways. Some jurisdictions have created comprehensive laws, while others rely on ethical frameworks or sector-specific standards. An outline of the main international players’ approaches to AI regulation is provided below:

  1. European Union – The EU AI Act:

The EU AI Act, proposed in 2021 and adopted in 2024, is the world’s first comprehensive legislation on AI and follows a risk-based framework. AI systems are classified into four risk levels:

  • Unacceptable Risk: AI systems that clearly endanger people’s safety, livelihoods, or fundamental rights are prohibited. This covers practices like social scoring, harmful manipulation, and certain forms of biometric surveillance.
  • High Risk: AI used in critical areas like recruitment, credit scoring, law enforcement, and medical devices falls into this category. These systems must meet strict requirements such as transparency, data governance, human oversight, and documentation.
  • Limited Risk: These systems face specific transparency obligations—for example, users must be made aware that they are interacting with an AI system. Applications like chatbots and deepfakes are covered at this level.
  • Minimal Risk: Most AI systems fall here (e.g., spam filters, AI in video games). These face no specific obligations, but providers are encouraged to follow voluntary codes of conduct.

The Act also prescribes penalties for non-compliance and promotes innovation through regulatory sandboxes. Once its provisions become fully applicable and enforceable, it will set a global benchmark for AI regulation.[3]

  2. United States:

The United States lacks a comprehensive AI law and instead follows a sector-specific approach, where agencies like the Food and Drug Administration (FDA), Federal Trade Commission (FTC), and National Highway Traffic Safety Administration (NHTSA) regulate AI within their domains. The National AI Initiative Act of 2020 promotes AI research, development, and inter-agency collaboration but is not regulatory in nature. In 2022, the White House introduced the Blueprint for an AI Bill of Rights, outlining non-binding principles such as transparency, data privacy, and algorithmic fairness. While these frameworks guide ethical AI use, they remain voluntary and advisory. States like California have also begun exploring independent AI regulations, reflecting a fragmented landscape. Overall, the U.S. approach favours innovation and self-regulation, with growing attention to ethical concerns but no unified legal framework yet.[4]

  3. China:

China regulates AI under a state-controlled model, prioritizing national security and political control. The Regulation on Deep Synthesis Internet Information Services (2023) mandates labelling AI-generated content and prohibits fake news. China emphasizes algorithmic transparency, requiring companies to register and disclose their AI algorithms with the government. Social credit systems and surveillance tools are also regulated, though criticized for lack of privacy safeguards.[5]

INDIA’S LEGAL AND REGULATORY POSITION

India, despite being one of the fastest-growing digital economies and a hub for technological innovation, currently lacks a dedicated legal framework to regulate Artificial Intelligence. There is no standalone legislation or specific regulatory body governing the development, deployment, or ethical use of AI systems. Instead, the legal landscape is largely shaped by general laws, policy documents, and governmental initiatives, most of which only indirectly address AI-related concerns.[6]

At present, the Information Technology Act, 2000[7] serves as the backbone of India’s digital legal regime. While it governs cybercrimes, electronic records, and data breaches, it does not explicitly cover issues such as algorithmic decision-making, AI accountability, or automated bias. Similarly, the recently enacted DPDP Act, 2023[8] is focused on personal data processing and user consent. While it can apply to AI systems that process personal information, it lacks provisions specific to AI challenges such as explainability, autonomy, and machine learning opacity.[9]

Another relevant statute is the Consumer Protection Act, 2019, which could potentially be invoked if AI-driven products or services cause harm due to defects or misleading performance. However, these existing laws offer piecemeal protection and fall short of addressing the unique legal, ethical, and technological risks posed by advanced AI systems.[10]

On the policy front, the Indian government has taken several progressive steps to foster AI development while hinting at future regulation. In 2018, NITI Aayog released a landmark report titled “National Strategy for Artificial Intelligence”, outlining India’s vision to become a global leader in AI. It emphasized five core sectors: healthcare, agriculture, education, smart mobility, and smart cities. Follow-up papers, including “Responsible AI for All”, addressed ethical considerations such as bias, transparency, and human oversight but remained non-binding in nature.

The Ministry of Electronics and Information Technology (MeitY) has also launched initiatives like the National AI Portal, AI Research Centres, and the IndiaAI program. These aim to promote indigenous AI innovation, particularly in public service delivery. However, while MeitY has acknowledged the need for responsible AI, no binding regulatory framework or draft bill has yet emerged.[11]

In summary, India’s AI legal framework remains fragmented and policy-driven, with a noticeable gap between technological growth and legal preparedness. A formal, enforceable, and forward-looking legal structure is still a work in progress.

LEGAL CHALLENGES IN REGULATING AI IN INDIA

While Artificial Intelligence holds immense potential, its unregulated growth in India brings serious legal and ethical challenges, spanning accountability, bias, transparency, privacy, and constitutional rights. Unlike other jurisdictions that have begun to develop structured responses, India still lacks a dedicated legal framework, which leaves multiple grey areas in law. Below are the major legal challenges India faces in regulating AI:

  1. Accountability and legal liability:

A central legal dilemma with AI is assigning liability when something goes wrong. In India, AI is increasingly used in governance and service delivery—such as AI chatbots, biometric-based welfare systems, and automated decision-making. For instance, errors in Aadhaar-linked benefit systems have led to exclusion from rations or pensions, but who is responsible when an AI-driven process fails—the programmer, the implementing agency, or the state?[12]

In the absence of specific legislation, victims have no clear path to claim compensation or assign fault. Traditional tort law does not adequately address harms caused by autonomous decision-making systems.

  2. Bias and discrimination in automated systems:

AI systems trained on biased data can reinforce social discrimination. In India, this is especially dangerous given deep-rooted issues of caste, gender, and linguistic diversity. For example, many private companies and startups now use automated hiring tools to screen resumes. If these systems are trained primarily on resumes from elite institutions or urban areas, they may inadvertently filter out candidates from marginalized backgrounds, reinforcing inequality.[13]

In public systems, the risk is even greater. Facial recognition technology (FRT) is being deployed by law enforcement in Delhi and other states. These systems are often less accurate for women and darker-skinned individuals. The lack of transparency in their use and absence of oversight mechanisms poses a threat to Article 14 (Right to Equality) of the Indian Constitution.[14]

  3. Transparency and explainability:

Many AI systems operate as “black boxes”—they generate results without explaining how they arrived at them. This lack of explainability undermines the principles of natural justice, especially the right to a reasoned decision. The Supreme Court’s own AI initiative, SUPACE (Supreme Court Portal for Assistance in Courts Efficiency), though designed to assist judges in legal research, raises questions: if AI tools begin influencing judicial thought, should litigants have a right to challenge the outputs?[15]

Moreover, when AI systems are used in welfare screening or automated policing, citizens may never know why a decision was made about them—violating both procedural fairness and due process rights under Article 21.[16]

  4. Privacy and surveillance concerns:

The use of AI in surveillance—especially through facial recognition and data profiling—poses a major threat to the right to privacy, a fundamental right affirmed by the Supreme Court in Justice K.S. Puttaswamy v. Union of India (2017)[17]. Several reports have highlighted the use of AI-powered facial recognition tools by Delhi Police during protests and public gatherings, with no legal safeguards or consent mechanisms in place.

In the absence of a data protection authority or AI oversight board, there is no redressal if someone’s biometric data is misused or stored unlawfully by an AI system.[18]

  5. Intellectual property and AI-generated content:

India’s Copyright Act, 1957[19] assumes that only human authors can create copyrighted works. However, with the rise of generative AI tools—used to create art, music, or even legal documents—the question arises: who owns this content? While there is no legal clarity yet, the increasing commercial use of such content demands urgent policy attention.[20]

  6. Absence of judicial interpretation:

One of the major gaps in India’s AI legal environment is the lack of case law. While courts like the Delhi High Court have dealt with tech-related privacy and algorithmic transparency in Aadhaar-linked cases, no Indian court has directly interpreted AI-related liability, bias, or transparency issues. This judicial silence makes it difficult to evolve doctrinal clarity around rights and remedies in AI governance.[21]

India stands at a crossroads where AI is already influencing governance, justice, and markets—but without the legal safeguards to ensure fairness, accountability, or constitutional compliance. These real-world examples show that regulation is not just desirable, but urgently necessary. Without it, both citizens and institutions remain vulnerable to algorithmic harm without recourse.

THE ROAD AHEAD

To address the legal vacuum surrounding Artificial Intelligence, India must move beyond policy discussions and develop a comprehensive, enforceable legal framework. The goal should be to promote innovation while ensuring AI systems operate in a manner that is ethical, transparent, and consistent with constitutional values.

  1. Enact a comprehensive AI regulation:

India needs a dedicated AI law or framework, possibly along the lines of the European Union’s risk-based model. Such a law should categorize AI systems based on the level of risk they pose and impose corresponding regulatory obligations. High-risk systems—like those used in surveillance, policing, or welfare decisions—must be subject to strict oversight, human review, and accountability mechanisms.

  2. Establish an AI regulatory authority:

A central regulatory body—possibly under MeitY or as an independent commission—should be established to audit algorithms, certify high-risk AI applications, and investigate harms caused by AI misuse. This authority should also collaborate with sector-specific regulators like SEBI, IRDAI, and TRAI.[22]

  3. Ensure algorithmic transparency and explainability:

Developers of AI systems, especially in sensitive areas, should be mandated to ensure algorithmic transparency and provide explanations for AI-driven decisions. Legal standards for explainability will help courts and citizens challenge unfair or biased outcomes.

  4. Strengthen accountability and redressal mechanisms:

Clear liability rules must be laid down for AI harms. Victims of AI-related discrimination or loss should have access to effective grievance redressal, compensation, and judicial remedies. The law must address both public and private sector uses of AI.

  5. Promote ethical and inclusive AI development:

India’s diverse socio-economic fabric demands inclusive data sets, anti-discrimination audits, and ethical design principles. Government procurement of AI tools should be conditional on fairness and bias testing.

  6. Encourage interdisciplinary collaboration:

Lawyers, technologists, ethicists, and civil society must work together to shape AI governance. Capacity building through academic research, legal education, and public discourse is crucial to ensuring meaningful regulation.[23]

CONCLUSION

Artificial Intelligence is no longer a futuristic concept—it is already influencing governance, industry, and daily life in India. While its potential is immense, the absence of a dedicated legal framework creates serious risks related to accountability, bias, privacy, and constitutional rights. Existing laws like the IT Act or DPDP Act offer limited safeguards and fail to address AI-specific harms. Real-world examples—such as facial recognition in policing or AI in welfare delivery—highlight the urgent need for regulation. Without clear legal standards, both public and private uses of AI may infringe on individual rights without remedy.

India now stands at a critical juncture. By adopting a balanced, rights-based regulatory approach, backed by institutional oversight and ethical design principles, it can shape AI in a way that promotes innovation without sacrificing justice or fairness. The time to act is now—before legal uncertainty turns into societal harm.

 

REFERENCES

[1] Cole Stryker and Ede Kavlakoglu, “What Is Artificial Intelligence” (IBM, August 9, 2024) https://www.ibm.com/think/topics/artificial-intelligence accessed June 19, 2025

[2] Glover E, “What Is Artificial Intelligence (AI)?” (Built In, October 21, 2024) https://builtin.com/artificial-intelligence accessed June 19, 2025

[3] Nguyen N, “What Is the EU AI Act? A Comprehensive Overview” (Feedback fruits, February 12, 2025) https://feedbackfruits.com/blog/from-regulation-to-innovation-what-the-eu-ai-act-means-for-edtech accessed June 19, 2025

[4] White & Case LLP International Law Firm, Global Law , “AI Watch: Global Regulatory Tracker – United States” (White & Case LLP, March 31, 2025) https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states accessed June 19, 2025

[5] Savio, “AI Regulations Around the World” Spiceworks (April 30, 2024) https://www.spiceworks.com/tech/artificial-intelligence/articles/ai-regulations-around-the-world/ accessed June 19, 2025

[6] Dey A and Cyril M, “Regulation of AI and Large Language Models in India” (India Briefing News, March 27, 2024) https://www.india-briefing.com/news/india-regulation-of-ai-and-large-language-models-31680.html/ accessed June 19, 2025

[7] The Information Technology Act, 2000

[8] The Digital Personal Data Protection Act, 2023

[9] Dey and Cyril (n 6)

[10] Gupta D, “Navigating the AI Horizon- Safeguarding Consumer Rights in the Digital Era” (IndiaAI, July 29, 2024) https://indiaai.gov.in/article/navigating-the-ai-horizon-safeguarding-consumer-rights-in-the-digital-era accessed June 20, 2025

[11] Mali PhDAdvP, “Addressing the Challenges Posed by AI in India – Student Law Journal (SLJ)” (Student Law Journal (SLJ) | Dharmashastra National Law University – SLJ DNLU, June 25, 2024) https://dnluslj.in/addressing-the-challenges-posed-by-ai-in-india/ accessed June 20, 2025

[12] Shagun, “Many Go without Ration, Pension in Villages as Aadhaar E-KYC & Biometrics Errors Create Unique Problems” Down To Earth (December 12, 2024) https://www.downtoearth.org.in/governance/many-go-without-ration-pension-in-villages-as-aadhaar-e-kyc-biometrics-errors-create-unique-problems accessed June 20, 2025

[13] Malik A, “AI Bias In Recruitment: Ethical Implications And Transparency” (Forbes, September 25, 2023) https://www.forbes.com/councils/forbestechcouncil/2023/09/25/ai-bias-in-recruitment-ethical-implications-and-transparency/ accessed June 20, 2025

[14] Ulmer A and Siddiqui Z, “India’s Use of Facial Recognition Tech during Protests Causes Stir” Reuters (February 17, 2020) https://www.reuters.com/article/world/indias-use-of-facial-recognition-tech-during-protests-causes-stir-idUSKBN20B0ZP/ accessed June 20, 2025

[15] Kumari S, “The Role of Artificial Intelligence in Modern Courts: A Tool of Transformation or a Threat to Justice?” (SSRN, June 16, 2025) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5294742 accessed June 20, 2025

[16] Jana, “The AI Black Box: What We’re Still Getting Wrong about Trusting Machine Learning Models” (Hyperight, March 3, 2025) https://hyperight.com/ai-black-box-what-were-still-getting-wrong-about-trusting-machine-learning-models/ accessed June 20, 2025

[17] K S Puttaswamy (Retd) v Union of India (2017) 10 SCC 1

[18] Ulmer and Siddiqui (n 14)

[19] The Copyright Act, 1957

[20] Chhabra H and Pandey KG, “Balancing Indian Copyright Law with AI-Generated Content: The ‘Significant Human Input’ Approach” (IJLT, February 26, 2024) https://www.ijlt.in/post/balancing-indian-copyright-law-with-ai-generated-content-the-significant-human-input-approach accessed June 20, 2025

[21] Stuward JJ, “The Legal Challenges of Artificial Intelligence in India” (Lawful Legal, June 12, 2025) https://lawfullegal.in/the-legal-challenges-of-artificial-intelligence-in-india/?amp=1 accessed June 20, 2025

[22] Team NICA, “Approach to Regulating AI in India” (NEXT IAS – Made Easy Learnings Pvt. Ltd, April 16, 2025) https://www.nextias.com/ca/editorial-analysis/16-04-2025/regulating-ai-india-approach accessed June 20, 2025

[23] Singhania K, “Regulating AI In India: Challenges, Initiatives, And Path To Future Success” (India, May 7, 2025) https://www.mondaq.com/india/new-technology/1621322/regulating-ai-in-india-challenges-initiatives-and-path-to-future-success accessed June 20, 2025
