Regulation of Artificial Intelligence in India: Legal Challenges and the Road Ahead

Published on: 1st December, 2025

Authored by: Roona Shukla
Shri Jai Narain Misra PG College, Lucknow

Abstract

India’s artificial intelligence regulatory framework faces critical challenges due to rapid technological advancement and insufficient legal infrastructure. The country’s legal system currently relies on the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023, but these statutes lack adequate provisions to address AI-specific challenges such as algorithmic accountability, data privacy, and autonomous system liability. A comparative examination of India’s approach against the European Union’s risk-based AI Act, the United States’ sector-specific strategy, and the United Kingdom’s balanced model reveals substantial regulatory gaps. Key legal challenges include determining intellectual property rights for AI-generated content, ensuring robust data protection, establishing clear liability frameworks for AI-driven decisions, and eliminating algorithmic bias. The absence of a comprehensive harm-based risk classification system particularly hampers effective risk management in high-stakes sectors such as healthcare, finance, and law enforcement. This article proposes policy recommendations including a scientific, iterative approach to risk assessment, enhanced industry collaboration, and establishment of a national AI Safety Institute. Promoting self-regulation, strengthening government oversight, and engaging multidisciplinary experts are essential for adaptive and responsible AI governance. As India navigates this transformational period, balancing innovation with ethical and legal safeguards will enable the country to harness artificial intelligence’s potential while protecting human rights and societal interests.

I. Introduction

The Information Technology Act, 2000—India’s principal cyber law—was not designed to accommodate artificial intelligence’s rapid technological evolution.[1] While the Act addresses digital commerce, cybercrime, and fundamental data protection, it does not adequately regulate AI-specific concerns. Despite subsequent amendments, crucial issues such as algorithmic accountability and data privacy in the AI context remain unaddressed. Because of their autonomous nature, AI systems can deliver unfair or harmful outcomes if left unregulated, creating a substantial legal vacuum.

Currently, India adopts a reactive approach to AI-related legal challenges. As artificial intelligence technology advances, the legal framework consistently lags behind. Without robust structures to correct algorithmic bias and protect data privacy, AI systems may exacerbate social inequality or jeopardize personal information.[2] This regulatory gap necessitates comprehensive legal reform that anticipates technological developments rather than merely responding to them.

II. Comparative Analysis of Global Legal Frameworks

International jurisdictions have developed varied regulatory responses to challenges posed by emerging technologies like artificial intelligence. India can examine the European Union’s risk-based framework, the United States’ sector-specific standards, and the United Kingdom’s balanced approach to develop a cohesive legislative strategy for safe AI deployment.

The European Union has established comprehensive data protection through the General Data Protection Regulation (GDPR), which emphasizes accountability and transparency. Building on this foundation, the European Commission proposed the AI Act in April 2021, which establishes risk-based thresholds for AI systems according to their potential harms. This framework categorizes AI applications into unacceptable risk, high risk, limited risk, and minimal risk categories, with corresponding regulatory requirements.[3]
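The four-tier structure described above can be illustrated with a brief sketch. The tier names follow the AI Act's public categorization; the example applications and the mapping of applications to tiers are illustrative assumptions for exposition, not a legal reading of the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers broadly following the EU AI Act's categorization."""
    UNACCEPTABLE = "prohibited outright (e.g. social scoring by public authorities)"
    HIGH = "strict obligations: conformity assessment, documentation, human oversight"
    LIMITED = "transparency duties (e.g. disclosing that a chatbot is an AI)"
    MINIMAL = "no additional obligations beyond existing law"

# Illustrative, non-authoritative mapping of applications to tiers.
EXAMPLE_TIERS = {
    "real-time biometric surveillance": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(application: str) -> str:
    """Return the regulatory posture for a known example application."""
    tier = EXAMPLE_TIERS.get(application, RiskTier.MINIMAL)
    return f"{application}: {tier.name} risk -- {tier.value}"

print(obligations("credit scoring"))
```

The point of the tiered design is that regulatory burden scales with potential harm: the same statute can prohibit one use outright while leaving another essentially unregulated.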

The United States employs a sector-specific approach that focuses on particular industries and ensures compliance with existing regulatory standards applicable to those sectors. Rather than implementing unified AI legislation, the U.S. relies on various legal frameworks and industry-specific guidelines. This approach allows for flexibility and specialized expertise but may create inconsistencies across different domains.

The United Kingdom balances innovation imperatives in AI development with robust privacy protection. The Data Protection Act 2018 incorporates GDPR principles while encouraging ethical considerations in AI design and deployment. The UK’s approach emphasizes proportionality and contextual regulation rather than blanket restrictions.[4]

Comparative analysis reveals that India has conducted limited research examining the intersection of algorithmic accountability and data privacy, whereas the EU AI Act and other global frameworks treat these as interconnected concerns. AI transparency issues fundamentally undermine both privacy and accountability, particularly in critical sectors such as digital governance and law enforcement. Current Indian scholarship on data privacy has not adequately examined how transparency and accountability function within AI systems.[5] While the EU and US have developed integrated frameworks, India lacks a systematic mechanism to adapt its laws to address these interrelated challenges simultaneously.

III. India’s AI Governance Framework

India’s approach to AI governance does not consist of comprehensive AI-specific legislation but rather comprises existing laws and evolving policy initiatives. Although no single statute governs artificial intelligence, current legislation and emerging regulations are being modified and applied to various aspects of AI research and deployment.

Existing Legal Framework:
The Information Technology Act, 2000 (IT Act) governs cybersecurity, data protection, and intermediary accountability, including liability for AI misuse such as deepfakes. The Digital Personal Data Protection Act, 2023 (DPDP Act) regulates how AI systems process personal data but does not comprehensively address all AI-related concerns. The IT Rules, 2021 require intermediaries to remove illegal content, including AI-generated content, within specified timeframes.[6]

Policy Initiatives:
In 2018, India developed the National Strategy for Artificial Intelligence, a detailed policy framework that has not yet been officially enacted or implemented. The India AI Mission focuses on building robust AI models, collecting high-quality datasets, developing AI skills, and ensuring AI safety and trustworthiness. These initiatives aim to strengthen the country’s AI ecosystem.

An AI Governance Panel comprising expert stakeholders is currently developing recommendations for AI regulation in India. However, uncertainty remains regarding whether these recommendations will translate into binding government policies.[7]

IV. Problems with AI Risk Assessment

India currently lacks a comprehensive system to classify AI risks based on potential harm. Existing risk assessment methodologies remain unreliable and incomplete. In 2021, NITI Aayog (National Institution for Transforming India) discussed risks and ethical considerations for AI development but did not clearly articulate these risks before formulating regulatory guidelines.

A study by the Telecom Regulatory Authority of India (TRAI) identified several critical AI-related problems in the Indian context, including poor data quality, data biases, data security vulnerabilities, data privacy concerns, inaccurate or biased algorithms, and unethical AI applications. The Telecom Engineering Center drafted a preliminary AI risk assessment methodology, but this framework focuses narrowly on fairness while neglecting other essential considerations.[8]

V. Key Legal Challenges

Several fundamental legal challenges must be addressed to establish effective AI regulation in India:

Intellectual Property Rights: Determining copyright ownership for AI-generated content presents one of the most contentious legal questions. The crucial issue is whether copyright belongs to the AI’s creator, the user, or potentially the AI system itself. Current intellectual property laws do not adequately address these novel questions, creating significant legal uncertainty.

Privacy and Data Protection: AI systems frequently require vast amounts of data, raising serious concerns about privacy, data protection, and informed consent. Organizations deploying AI technologies must ensure compliance with data protection regulations. However, the intersection of AI processing and privacy rights remains inadequately defined in Indian law.

Liability and Accountability: Determining responsibility when an AI system makes a decision that causes harm presents complex legal challenges. This question becomes particularly difficult when the AI’s decision-making process lacks transparency. Courts struggle to establish liability in such situations, as traditional legal concepts of fault and causation may not readily apply to autonomous systems.

Transparency and Explainability: Legal frameworks increasingly require AI systems to provide transparent explanations for their conclusions and operations, especially in critical domains such as healthcare and criminal justice. This requirement poses significant challenges because AI systems often function as “black boxes” whose internal processes resist easy explanation.

Bias and Discrimination: AI systems can perpetuate and amplify biases present in their training data, raising legal concerns about fairness and discrimination. These issues become particularly acute in contexts such as employment decisions, credit scoring, and insurance underwriting, where algorithmic bias can violate anti-discrimination laws.[9]

These risk factors require thorough examination to establish an appropriate AI risk classification system for India. There is general consensus that India should regulate “high-risk use cases,” though this term remains inadequately defined. High-risk applications typically include those deployed in critical infrastructure, finance, credit scoring, insurance, product safety, consumer rights, law enforcement, and judicial proceedings. India should adopt a scientific and iterative approach to regulating high-risk use cases. Risk assessment in India must consider not only potential harm but also factors such as the context of AI deployment, consumer awareness levels, and digital literacy rates. Competition Commission of India market studies could help identify market failures within the AI ecosystem.[10]
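The multi-factor assessment sketched above—potential harm weighed against deployment context, consumer awareness, and digital literacy—can be made concrete with a minimal sketch. The factor names, weights, and thresholds here are hypothetical assumptions chosen for illustration, not a proposed standard:

```python
# Sectors the text identifies as typically high-risk.
HIGH_RISK_SECTORS = {"healthcare", "finance", "credit scoring", "insurance",
                     "law enforcement", "critical infrastructure",
                     "judicial proceedings"}

def risk_score(sector: str, potential_harm: float,
               consumer_awareness: float, digital_literacy: float) -> float:
    """Combine harm with contextual factors; all inputs are in [0, 1].

    Higher consumer awareness and digital literacy discount the effective
    risk, while deployment in a high-stakes sector adds a fixed premium.
    Weights (0.4 discount factor, 0.2 sector premium) are illustrative.
    """
    contextual_discount = 0.5 * (consumer_awareness + digital_literacy)
    score = potential_harm * (1.0 - 0.4 * contextual_discount)
    if sector in HIGH_RISK_SECTORS:
        score += 0.2
    return min(score, 1.0)

def classify(score: float) -> str:
    """Map a composite score to a coarse regulatory tier."""
    if score >= 0.7:
        return "high risk: strict oversight"
    if score >= 0.4:
        return "limited risk: transparency duties"
    return "minimal risk: monitor only"

# A lending algorithm in a low-literacy market scores as high risk,
# while a low-harm consumer application does not.
print(classify(risk_score("finance", 0.8, 0.3, 0.2)))
```

A sketch like this makes the policy trade-off visible: the same underlying technology can land in different tiers depending on where and for whom it is deployed, which is precisely why the text argues that harm alone is an insufficient basis for classification in the Indian context.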

VI. AI Policy Roadmap for India

India requires a clear, comprehensive plan to manage the risks and benefits of artificial intelligence. The following framework outlines essential steps Indian policymakers should take to establish robust AI governance:

1. Understand AI Risks and Benefits: Officials must develop a nuanced understanding of both AI’s potential and its limitations before formulating regulations. India should conduct a thorough review of how AI is currently deployed domestically, similar to how U.S. agencies report on AI usage and associated challenges. The Principal Scientific Adviser should lead comprehensive assessments of India’s AI market to identify failures and improvement opportunities. These evaluations should inform better policies, increase stakeholder awareness, and provide training for all participants in the AI ecosystem.

2. Classify AI Applications by Risk Level: The government should identify which AI applications pose the greatest dangers and require strict regulatory oversight. High-risk AI applications—such as those in healthcare, finance, and law enforcement—demand close monitoring. Multiple government departments must collaborate to assess these risks, including the ministries of Science and Technology, Consumer Affairs, Health, and Information Technology. The Parliamentary Standing Committee on Communications and Information Technology can provide oversight, while the proposed AI Safety Institute should conduct technical risk assessments.

3. Identify and Address Legislative Gaps: Policymakers should systematically examine areas where current laws inadequately address AI-related challenges. For instance, while some concerns such as employment displacement or artificial general intelligence may not require immediate legislative intervention, issues like deepfakes, market failures, and algorithmic discrimination demand urgent attention. Experts from government, legal academia, and industry should collaborate to identify and remedy these gaps.

4. Promote Self-Regulation: India should encourage companies to adopt voluntary, principle-based guidelines for AI rather than relying exclusively on government mandates. Regulators and private sector entities can collaborate to establish standards, share knowledge, and adapt to technological developments. Businesses should commit to making their AI systems safe, accurate, and privacy-respecting. Industry associations can develop voluntary codes of conduct for different sectors, including healthcare, finance, and education.

5. Strengthen Government Enforcement Powers: The government must possess adequate authority to address problems caused by AI systems promptly. This includes ensuring that all AI industry participants understand their roles and responsibilities. Citizens should receive new rights regarding AI systems that affect them, and companies must be held accountable when problems arise through mechanisms such as mandatory audits and human oversight requirements. The government must ensure effective enforcement of all regulations.

6. Adopt a Whole-of-Government Approach: AI regulations should be developed and enforced by all relevant ministries and regulatory bodies, including the Reserve Bank of India (RBI), TRAI, and the Ministry of Health. The Ministry of Electronics and Information Technology (MeitY) should establish foundational rules applicable across all sectors and promote voluntary codes of conduct. The Prime Minister’s Office or the National Security Council Secretariat should coordinate AI governance across agencies. A national AI Safety Institute should provide unbiased technical expertise and facilitate international collaboration among AI experts.

7. Engage Multidisciplinary Experts: As India discusses new legislation such as the proposed Digital India Act, policymakers must include experts from diverse fields in the conversation. Future policies should be shaped by comprehensive debates about fairness, privacy, security, and intellectual property rights, with input from technologists, lawyers, ethicists, social scientists, and civil society representatives.[11]

By implementing these measures, India can create a balanced and effective AI policy framework that encourages innovation while protecting citizens’ rights and safety.

VII. Conclusion

India has developed various concepts and strategies for AI policy; however, most remain unofficial or unimplemented. Significant gaps persist in how AI risks are assessed and managed. The regulations governing generative AI are complex and continuously evolving. As India enters this era of unprecedented technological advancement, addressing these legal challenges promptly and intelligently is increasingly critical. Technology and law must maintain ongoing dialogue so that legal systems can not only address current difficulties but also remain sufficiently robust to accommodate future developments. Finding an appropriate balance between technological innovation and ethical and legal safeguards will prove essential to creating a society where AI is both transformative and well-regulated.

As Elon Musk has observed: “We need to be super careful with AI. It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.”

References:
[1] AZoRobotics, “The AI Regulatory Landscape in India: What to Know” (AZoRobotics, February 26, 2024), https://www.azorobotics.com/Article.aspx?ArticleID=742.
[2] Id.
[3] Siva Vignesh S.K.V. et al., Legal Challenges of Artificial Intelligence in India’s Cyber Law Framework: Examining Data Privacy and Algorithmic Accountability Via a Comparative Global Perspective, 6(6) INT’L J. FOR MULTIDISCIPLINARY RES. (2024), DOI:10.36948/ijfmr.2024.v06i06.31347.
[4] Id.
[5] Id. (citing Majumdar & Chattopadhyay, 2020).
[6] Kshitija Kashik, Navigating the Future: India’s Strategic Approach to Artificial Intelligence Regulations, NATIONAL CENTRE FOR GOOD GOVERNANCE, https://ncgg.org.in/sites/default/files/lecturesdocument/Kshitija_Kashik__Research_Paper.pdf.
[7] Next IAS Current Affairs Team, “Approach to Regulating AI in India,” NEXT IAS (April 16, 2024), https://www.nextias.com/ca/editorial-analysis/16-04-2025/regulating-ai-india-approach.
[8] Id.
[9] Law and Ethics In Tech, “The Legal and Ethical Challenges of AI in the Financial Sector: Lessons from BIS Insights,” MEDIUM (January 3, 2024), https://lawnethicsintech.medium.com/the-legal-and-ethical-challenges-of-ai-in-the-financial-sector-lessons-from-bis-insights-129c9d46f9a4.
[10] Amlan Mohanty & Shatakratu Sahu, “India’s Advance on AI Regulation,” CARNEGIE ENDOWMENT FOR INTERNATIONAL PEACE (November 2024), https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en.
[11] Id.
