Published on: 4th October 2025
Authored By: Revant Upadhyay
Aligarh Muslim University
Abstract
Artificial Intelligence (AI) is rapidly transforming societies and economies, creating unprecedented opportunities and risks. For India, a fast-growing technology hub, AI promises significant benefits in healthcare, agriculture, and finance, but also poses governance challenges. Currently, India has no dedicated AI legislation,[^1] though policy efforts (e.g., NITI Aayog’s National Strategy 2018) emphasize ethical, inclusive AI development. This article reviews India’s constitutional and legal framework as it pertains to AI, surveys relevant case law, compares global regulatory approaches, identifies statutory gaps, and offers recommendations. It argues that India must balance innovation with fundamental rights such as privacy and equality by forging clear standards and institutions for AI governance.
Introduction
AI technologies, including machine learning and large language models, are advancing at breakneck speed, touching everyday life and commerce. Their use by governments, businesses, and individuals can enable efficiency and social good, but also raises concerns over privacy, bias, and autonomy. Some countries have moved swiftly to legislate AI (the European Union and China, for example) or issue ethics guidelines (as in the United States and through UNESCO). India, meanwhile, has been largely reactive, focusing first on data rights and sectoral norms.
India stands at a critical juncture. Its vibrant technology sector and growing AI ecosystem offer great promise, yet without clear legal frameworks, fundamental rights and public trust could be at risk. This article provides a systematic legal perspective on AI governance in India, examining the constitutional foundations, existing statutory provisions, emerging case law, and comparative global models. It then identifies key legal challenges and proposes a roadmap for comprehensive AI regulation that respects both innovation and constitutional values.
Constitutional and Statutory Framework
The Constitution
India’s Constitution is formally technology-neutral, but its fundamental rights constrain AI deployment. The Supreme Court in Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) held that privacy, including informational privacy, is intrinsic to Article 21’s right to life and personal liberty.[^2] This landmark ruling establishes that any AI system collecting or inferring personal data must respect this constitutional guarantee. Articles 14 (equality before law) and 19(1)(a) (freedom of speech and expression) similarly limit AI applications that discriminate or unduly restrict expression.
Information Technology Act, 2000
India’s primary cyberlaw addresses offenses such as hacking and unauthorized access, and provides limited privacy safeguards through intermediary liability provisions. However, it contains no provisions specifically targeting AI systems or algorithmic decision-making.
Digital Personal Data Protection Act, 2023
India’s first comprehensive data protection law mandates informed consent for processing of personal data and grants individuals rights to access, correct, and erase their data.[^3] It establishes a Data Protection Board to enforce privacy rules. However, the Act contains broad exemptions for government and security agencies, which may create gaps in protection when public authorities deploy AI systems.
Sectoral Regulations
Regulators in finance, telecommunications, healthcare, and other sectors have issued guidelines affecting AI applications: for instance, algorithmic trading rules for stock exchanges and guidelines for telemedicine platforms. However, these efforts remain fragmented, and there is no single unified AI law coordinating these various sectoral approaches.
Government Policies
NITI Aayog’s National Strategy for Artificial Intelligence (2018)[^4] and its Responsible AI for All vision (2021)[^5] envisage a framework of ethical, inclusive AI. These policy documents emphasize fairness, accountability, and transparency in AI design. While comprehensive and forward-thinking, they remain non-binding and aspirational rather than legally enforceable.
Draft Digital India Act
A proposed omnibus law (still pending as of early 2025) is reported to include provisions on AI regulation. Media reports suggest it would ban certain “high-risk” AI uses, such as social credit scoring systems, but details remain unavailable for public scrutiny.
Judicial Interpretation
Indian courts have yet to confront cases specifically centered on AI systems or algorithmic accountability. Instead, existing jurisprudence on fundamental rights provides interpretive guidance for future AI-related disputes.
In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), the Supreme Court declared that privacy, including informational privacy, is an intrinsic element of the right to life under Article 21.[^6] This suggests that any AI system collecting, processing, or inferring personal data must respect this constitutional right. The Court’s emphasis on dignity and autonomy provides a framework for challenging AI applications that undermine individual agency.
By analogous reasoning, Article 14’s equality guarantee would prohibit discriminatory AI outcomes—for instance, biased algorithms in credit scoring, hiring, or criminal justice that perpetuate caste, gender, or religious discrimination. Similarly, Article 19(1)(a) would protect free expression from undue censorship by automated content moderation tools or government surveillance systems.
So far, no case has directly addressed algorithmic accountability, explainability, or bias in AI decision-making. However, AI-related disputes are emerging under traditional laws. For example, in 2025, a coalition of media companies sued the developer of ChatGPT, arguing that training the model on copyrighted news content without authorization violated the Indian Copyright Act.[^7] Such litigation signals that courts will soon need to clarify how existing intellectual property, contract, and tort laws apply to AI technologies. The outcomes of these early cases will shape India’s AI jurisprudence for years to come.
Comparative Global Models
European Union
The EU leads globally with a detailed, rights-based regulatory framework. Its AI Act (finalized in 2024) classifies AI systems by risk level.[^8] It bans “unacceptable” AI applications, such as state-run social scoring and indiscriminate biometric surveillance in public spaces, and imposes strict requirements on “high-risk” systems. These requirements include mandatory risk assessments, use of high-quality training datasets, human oversight mechanisms, and transparency obligations. This approach explicitly ties AI regulation to fundamental rights protection, reflecting the EU’s precautionary regulatory philosophy.
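To make the tiered structure concrete, the following schematic sketch shows how a risk-based rulebook might be encoded. The tier names track the Act’s broad categories, but the example use cases and obligations here are simplified illustrations, not the Regulation’s actual annexes or text.

```python
# Schematic sketch of a risk-tiered rulebook (illustrative only; not the
# EU AI Act's actual annexes, definitions, or obligations).
RISK_TIERS = {
    "unacceptable": {"examples": ["social scoring by public authorities"],
                     "obligation": "prohibited"},
    "high":         {"examples": ["credit scoring", "recruitment screening"],
                     "obligation": "risk assessment, quality data, human oversight"},
    "limited":      {"examples": ["chatbots"],
                     "obligation": "transparency (disclose AI interaction)"},
    "minimal":      {"examples": ["spam filters"],
                     "obligation": "none beyond existing law"},
}

def obligations_for(use_case: str) -> str:
    """Look up which tier a use case falls into and what it must comply with."""
    for tier, rules in RISK_TIERS.items():
        if use_case in rules["examples"]:
            return f"{tier}: {rules['obligation']}"
    return "minimal: none beyond existing law"

print(obligations_for("credit scoring"))
# high: risk assessment, quality data, human oversight
```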
United States
There is currently no comprehensive federal AI law in the United States.[^9] Instead, existing statutes covering civil rights, consumer protection, aviation safety, and healthcare apply to AI applications as they arise. Federal agencies (FTC, FAA, FDA) issue guidance for specific sectors, and frameworks like NIST’s AI Risk Management Framework provide voluntary standards. Critics describe this as a patchwork approach that may leave gaps. The White House has issued executive orders on AI governance, and Congress has proposed various AI bills and the establishment of an AI Office, but these remain works in progress. The U.S. approach prioritizes innovation and flexibility over comprehensive ex-ante regulation.
United Kingdom
The UK emphasizes a flexible, pro-innovation regulatory approach. Its 2023 AI White Paper proposes a non-statutory, principles-based framework built on five principles: safety, transparency, fairness, accountability, and contestability.[^10] Rather than creating new AI-specific regulators, the UK model tasks existing sectoral regulators with applying these principles in their domains. The government has signaled it may later enact targeted legislation for the most powerful AI models. This contrasts with the EU’s prescriptive regime, reflecting the UK’s desire for industry-led solutions and regulatory agility.
International Principles
India is a signatory to global AI ethics frameworks that provide normative guidance. UNESCO’s 2021 Recommendation on the Ethics of AI and the OECD AI Principles (2019) stress human rights, transparency, accountability, and social well-being.[^11] They urge that AI systems have meaningful human oversight and treat all individuals with dignity. While not legally binding, these international norms influence India’s policy debates and civil society demands for responsible AI governance.
Legal Challenges
India’s current legal framework faces several critical challenges in addressing AI:
Privacy and Data Protection
AI systems often rely on massive datasets that may include sensitive personal information. Without a robust, rigorously enforced privacy regime, individuals’ fundamental right to informational privacy, as recognized in Puttaswamy, can be compromised.[^12] Even with the Digital Personal Data Protection Act in place, broad exemptions for law enforcement and national security may allow large-scale data processing without meaningful consent or oversight. This poses constitutional concerns, particularly when government agencies deploy AI for surveillance or predictive policing.
Bias and Discrimination
AI models trained on historical data can perpetuate and amplify societal biases related to caste, gender, religion, and other protected characteristics. In India’s diverse and historically stratified society, unregulated AI systems could entrench inequality or produce unjust outcomes in critical domains like credit scoring, employment screening, or criminal sentencing. While the Constitution’s equality guarantee under Article 14 is robust in principle, it lacks explicit mechanisms for algorithmic accountability. Biased AI outcomes might go unchallenged unless new safeguards, such as mandatory bias audits or impact assessments, are established.
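As an illustration of what a mandatory bias audit might compute, the sketch below calculates group-wise approval rates and a disparate-impact ratio for a hypothetical credit-scoring model. The data, group labels, and the 0.8 threshold are illustrative assumptions, not a standard prescribed by Indian law.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions, groups):
    """Compare favourable-outcome rates across demographic groups.

    decisions: list of 0/1 outcomes (1 = favourable, e.g. loan approved)
    groups:    list of group labels, parallel to `decisions`
    """
    totals, favourable = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        favourable[group] += outcome
    rates = {g: favourable[g] / totals[g] for g in totals}
    # Ratio of lowest to highest approval rate; values well below 1.0
    # (commonly, below an assumed 0.8 threshold) are a red flag for review.
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical outputs of a credit-scoring model (1 = loan approved).
decisions = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio, rates = disparate_impact_ratio(decisions, groups)
print(rates)   # {'A': 0.6, 'B': 0.4}
print(ratio)   # ~0.67 -> below the assumed 0.8 threshold, flag for review
```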
Accountability and Liability
Existing law is unclear on who bears responsibility when AI causes harm. If an autonomous AI system, such as a driverless vehicle or medical diagnostic tool, injures someone, it is uncertain whether liability falls on the developer, the deployer, the user, or some combination thereof. Traditional negligence and product liability doctrines may not fit neatly with AI systems that learn and evolve post-deployment. Some jurisdictions are exploring negligence-based or strict liability regimes specifically for AI; India has no such rules, potentially leaving victims without effective redress and creating uncertainty for innovators.
Transparency and Explainability
Many AI systems, especially those using deep learning, operate as “black boxes”: their internal decision-making processes are opaque even to their creators. There is no legal guarantee in India that individuals affected by automated decisions can demand an explanation of how those decisions were reached. This lack of transparency undermines accountability, makes it difficult to identify bias or errors, and can erode public trust. The EU’s regulations introduce a form of “right to explanation” for high-risk AI decisions, but India currently has no parallel provision.
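One widely used explainability technique is permutation importance, which probes a black-box model by shuffling one input feature at a time and measuring how much its output changes. The sketch below applies it to a hypothetical opaque scoring function; the model, feature names, and weights are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(X):
    # Stand-in for an opaque model; in practice the auditor cannot see this.
    hidden_w = np.array([2.0, 0.0, -1.0])
    return X @ hidden_w

X = rng.normal(size=(500, 3))    # hypothetical applicant features
baseline = black_box_score(X)

# Shuffle one feature at a time; a large change in the model's output
# means the decision leaned heavily on that feature.
for j, name in enumerate(["income", "pincode", "existing_debt"]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    delta = np.mean(np.abs(black_box_score(X_perm) - baseline))
    print(f"{name}: mean score change {delta:.2f}")
```

An audit like this can reveal, for example, that a nominally neutral feature such as a pincode drives outcomes, which is exactly the kind of finding a “right to explanation” would surface.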
Security and Cybersecurity
AI systems can be vulnerable to hacking, data poisoning, and adversarial attacks: maliciously crafted inputs designed to cause errors or manipulate outputs. India’s current cyber laws, primarily the IT Act, do not specifically mandate robustness standards, secure development practices, or regular security audits for AI systems. Ensuring that AI platforms are resilient against attacks and misuse is an increasingly urgent challenge, particularly for systems deployed in critical infrastructure or national security contexts.
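To illustrate the mechanics of an adversarial attack, the sketch below applies the fast gradient sign method to a toy linear classifier: each feature is nudged slightly in the direction that most shifts the model’s output. All weights, inputs, and the perturbation budget are illustrative assumptions.

```python
import numpy as np

# Toy linear "approve/deny" model; weights, bias, and inputs are
# illustrative assumptions, not a real deployed system.
w, b = np.array([1.5, -2.0, 0.5]), 0.1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # probability of "approve"

x = np.array([-0.5, 0.4, -0.2])   # legitimate input: model says "deny"
eps = 0.5                          # attacker's perturbation budget

# Fast gradient sign method: for a linear model, the gradient of the score
# with respect to the input is just `w`, so nudging each feature in the
# direction sign(w) maximally increases the model's output.
x_adv = x + eps * np.sign(w)

print(predict(x))      # ~0.18 -> denied
print(predict(x_adv))  # ~0.61 -> approved: small crafted edits flip the outcome
```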
Ethical and Societal Risks
AI tools raise profound ethical dilemmas that extend beyond traditional legal categories. These include autonomous weapons systems, invasive surveillance technologies, and deepfake-generated disinformation. For example, widespread deployment of facial recognition technology could infringe privacy rights and chill free assembly and expression. International ethics norms insist that AI must uphold human dignity and equality,[^13] but Indian regulation has yet to translate those ideals into enforceable rules or clear red lines for unacceptable AI applications.
Intellectual Property
The rise of generative AI has triggered copyright controversies. In India, content creators have sued AI developers for allegedly using copyrighted articles and creative works to train language models without authorization.[^14] Courts will have to decide whether such uses constitute copyright infringement or qualify as fair dealing under Indian law. The outcome will significantly affect content industries, journalism, and AI innovation. Beyond copyright, questions also arise about whether AI-generated works can be protected, and who owns them.
Regulatory Capacity
Finally, Indian regulators and courts have limited technical expertise in AI and machine learning. Crafting agile laws that keep pace with rapidly evolving technology is inherently difficult. There is a danger of either overregulation, which could stifle innovation and competitiveness, or under-enforcement, which would allow harms to proliferate unchecked. Building institutional capacity, fostering technical literacy among policymakers and judges, and creating mechanisms for adaptive regulation are essential but challenging tasks.
Recommendations
To address these challenges and position India as a leader in responsible AI governance, the following reforms should be considered:
1. Enact a Comprehensive AI Framework
India should adopt a risk-based AI law or empower an existing authority to oversee AI governance. This framework should be principles-driven, aligned with constitutional values (privacy, equality, dignity), and sufficiently flexible to accommodate technological evolution. It could delegate specific responsibilities to sectoral regulators for high-risk domains such as healthcare, finance, and criminal justice. As NITI Aayog recommends, such a framework should be complemented by strong data protection rules and sector-specific standards.[^15] The framework might establish an AI authority or commission with powers to issue binding guidelines, conduct audits, and impose penalties for non-compliance.
2. Strengthen Data Protection and Oversight
India must fully implement the Digital Personal Data Protection Act with adequate resources and institutional capacity. The Act’s exemptions should be narrowed to ensure that any processing of sensitive personal data, especially by government agencies, requires genuine legal justification and independent oversight. High-risk AI projects should be required to conduct data protection impact assessments before deployment. The Data Protection Board should be empowered with sufficient authority and resources to investigate violations, issue corrective orders, and impose meaningful penalties to deter non-compliance.
3. Mandate Sectoral Safeguards
Targeted, sector-specific guidelines should be developed for AI deployment in critical areas. For example:
- Financial regulators could require explainability for algorithmic credit scoring and mandatory third-party audits of lending algorithms.
- Health authorities could establish certification processes for AI diagnostic tools, requiring clinical validation before deployment.
- Law enforcement agencies could face restrictions on unregulated biometric surveillance and predictive policing tools, with mandatory judicial oversight.
This targeted approach, which mirrors efforts in other jurisdictions, addresses sector-specific risks while allowing innovation in lower-risk domains.
4. Require Transparency and Human Oversight
India should introduce a legal “right to explanation” for automated decisions that significantly affect individuals, including decisions about credit, employment, education, or criminal justice. AI developers and deployers should be obliged to build in human-in-the-loop oversight mechanisms and maintain detailed audit trails of algorithmic decisions. These transparency measures echo GDPR safeguards and UNESCO’s call for human-centric AI.[^16] They would enable affected individuals to challenge unfair decisions and allow regulators to investigate complaints effectively.
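As a sketch of what such an audit trail might look like in practice, the following code appends one tamper-evident record per automated decision. The field names, hashing scheme, and model identifiers are illustrative assumptions, not a mandated format.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, reviewer=None, path="audit.log"):
    """Append one tamper-evident record per automated decision (illustrative)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,            # what the system was shown
        "output": output,            # what it decided
        "human_reviewer": reviewer,  # human-in-the-loop sign-off, if any
    }
    # A hash over the serialized record makes later tampering detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("credit-model-v2.1", {"income": 540000}, "declined",
             reviewer="officer_17")
```

Records of this kind give affected individuals something concrete to contest and give regulators an evidentiary basis for investigating complaints.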
5. Clarify Liability Rules
Clear rules should be established for assigning liability when AI systems cause harm. One approach is to treat software developers and deployers as potentially liable if they fail to meet reasonable standards of care in design, testing, and deployment. The law could mandate insurance or certification schemes for high-risk AI systems, similar to requirements for motor vehicles or medical devices. Additionally, a statutory framework for product liability tailored to AI could provide victims with clearer pathways to compensation. Clear liability rules will incentivize safety practices and provide redress for those harmed.
6. Promote Standards and Ethics
The government should encourage adoption of international AI standards (such as ISO/IEC frameworks for AI management and trustworthiness) and ethical guidelines. Supporting industry-led certification schemes or establishing independent ethics boards can promote compliance with best practices. The NITI strategy advises benchmarking India’s approach against global norms,[^17] and harmonization with international standards can facilitate cross-border data flows and cooperation while maintaining high ethical standards.
7. Establish Regulatory Sandboxes
India should create regulatory sandboxes, safe testing environments where companies can trial innovative AI applications under regulatory supervision, similar to fintech sandboxes already operating in India. This approach would help regulators learn about emerging technologies and potential risks in real-world conditions before formal deployment at scale. Sandboxes allow innovation to proceed with appropriate oversight, enable iterative refinement of regulations, and build trust between regulators and industry.
8. Build Capacity and Foster Public Engagement
Substantial investment is needed in training for regulators, judges, law enforcement, and policymakers on AI concepts and implications. Educational programs, workshops, and secondments with technology firms or academic institutions can build this expertise. Policy development should engage academia, industry, and civil society through public consultations and multi-stakeholder dialogues. Public awareness campaigns should inform citizens of their rights in the AI era and how to exercise them. International collaboration with bodies like the OECD, UNESCO, and bilateral partnerships can help India shape effective, globally interoperable AI norms while preserving policy autonomy.
Conclusion
India stands at a crossroads in AI governance. The opportunities presented by AI, from improving healthcare delivery in rural areas to optimizing agricultural yields, are immense. Yet without clear legal frameworks, fundamental rights and public trust could be at risk. This article has demonstrated that while existing laws on data protection, consumer rights, and cyber offenses cover some aspects of AI, significant gaps remain in areas such as algorithmic accountability, bias mitigation, transparency, and liability.
Looking ahead, India needs a balanced, rights-respecting approach: one that avoids stifling innovation while safeguarding privacy, equality, and the public interest. Thoughtful legal and policy reforms, from robust privacy enforcement to targeted AI standards and institutional capacity-building, will allow India to harness AI’s transformative benefits for development while upholding constitutional values.
The road ahead requires collaboration among government, industry, civil society, and the judiciary. It demands vigilance against emerging risks and adaptability as technology evolves. By acting decisively and thoughtfully, India can position itself not merely as an AI consumer but as a global leader in responsible AI governance, setting an example for how democracies can embrace innovation while protecting human dignity and rights.
References
[^1]: As of early 2025, India has no dedicated AI-specific legislation, though various policy initiatives and sectoral guidelines address aspects of AI governance.
[^2]: Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.
[^3]: Anirudh Burman, Understanding India’s New Data Protection Law (Carnegie Endowment for International Peace, 3 October 2023).
[^4]: NITI Aayog, National Strategy for Artificial Intelligence: #AIForAll (Government of India, 2018).
[^5]: NITI Aayog, Responsible AI for All: Vision Document (Government of India, 2021).
[^6]: Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.
[^7]: Arpan Chaturvedi, ‘India panel to review copyright law amid legal challenges to OpenAI’ Reuters (6 May 2025).
[^8]: European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act) COM(2021) 206 final, adopted as Regulation (EU) 2024/1689.
[^9]: White & Case LLP, AI Watch: Global regulatory tracker—United States (Insight, 31 March 2025).
[^10]: UK Department for Science, Innovation & Technology, AI Regulation: A Pro-Innovation Approach (White Paper, March 2023).
[^11]: UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021); OECD, OECD Principles on AI (2019).
[^12]: Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.
[^13]: UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).
[^14]: Arpan Chaturvedi, ‘India panel to review copyright law amid legal challenges to OpenAI’ Reuters (6 May 2025).
[^15]: NITI Aayog, National Strategy for Artificial Intelligence: #AIForAll (Government of India, 2018).
[^16]: UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).
[^17]: NITI Aayog, National Strategy for Artificial Intelligence: #AIForAll (Government of India, 2018).