Sandboxes to statutes: Building India’s AI playbook

Published on: 12th January 2026

AUTHORED BY: SHUBHAM TANDON
MAHARSHI DAYANAND UNIVERSITY

Abstract

This article offers a detailed legal study of the regulation of artificial intelligence (AI) in India. It situates AI governance within India’s present statutory and policy framework; identifies the main legal challenges, namely liability, transparency, data protection, discrimination, competition, and safety; analyses different international regulatory models; and proposes a risk-based regulatory framework suited to Indian institutional capacities and development goals. The article serves as a draft manual, providing examples of statutory language, a template for Algorithmic Impact Assessment (AIA), and a practical compliance checklist for practitioners. It ends with a set of prioritized recommendations for policymakers, regulators, courts, and civil society seeking a feasible, rights-protective, and innovation-friendly AI regime.

Keywords

Artificial Intelligence; India; AI regulation; Digital Personal Data Protection Act, 2023; algorithmic transparency; liability; algorithmic impact assessment; National AI Commission; high-risk AI; data governance.

I. Introduction

Artificial intelligence is no longer confined to experimental labs; it has gradually become a central decision-maker in fields such as public administration, finance, health care, education, and infrastructure. This widespread use poses a governance paradox: on the one hand, AI promises great efficiency and scalable, inclusive services; on the other, it carries serious risks, including opaque automated decision-making, privacy violations, algorithmic bias, systemic safety failures, and the concentration of market power in a few firms.[1]

India’s socio-legal setting, with its large and diverse population, growing digital public infrastructure, and numerous development needs, makes the governance question exceptionally important. The law must not only protect individual rights and public safety but also enable inclusive innovation and preserve competition and democratic accountability.

II. Current Legal and Policy Conditions in India

At present, India does not have a single, comprehensive horizontal law for AI. Rather, AI governance rests on various instruments: sectoral regulatory frameworks (financial, telecom, insurance, health), the Information Technology Act, 2000 and its regulations, the Digital Personal Data Protection Act, 2023 (DPDP Act), and non-binding government guidelines and policy declarations. In 2025, the Government of India released AI governance guidelines that emphasize human-centric AI, accountability, and inclusion, reflecting an effort to shape a coherent policy response while leaving enforcement to existing laws and sectoral regulators.[2]

The DPDP Act, 2023 is central to AI regulation in India because data is the fundamental resource for most high-impact AI systems.[3] The Act sets out the duties of data fiduciaries and the rights of individuals (data principals); however, it leaves significant questions about AI unanswered, in particular the legal grounds for large-scale model training, anonymization and re-identification risks, and cross-border data flows for model development. Administrative experiments, such as pilot e-governance projects and court-assistance tools, have revealed concerns about the delegation of authority, procedural fairness, and due process when algorithmic outputs affect legal rights.

III. Core Legal Challenges

  1. Liability and Access to Remedy

AI’s technical features, such as the opacity of complex models, the multiplicity of actors in development and deployment chains, and continuous learning after deployment, strain existing legal categories such as negligence, product liability, and vicarious liability. A victim-friendly remedial framework must address attribution difficulties and evidentiary gaps without imposing absolute liability that discourages beneficial deployments. A layered liability model is a practical response: deployers bear primary responsibility; developers bear secondary responsibility for defects in model design or training that cause harm; and both are required to carry insurance or contribute to compensation mechanisms for catastrophic harms.

  2. Transparency, Explainability, and Procedural Fairness

Administrative law principles require transparency, explanation, and human review when AI-based systems make decisions that significantly affect individuals, such as access to welfare, employment, credit, or liberty. Policy statements prescribing “human oversight” should be put into practice: affected persons should be notified, automated decisions explained, human review procedures clearly defined, and rights of appeal and correction provided.[4]

  3. Bias and Equality

AI systems trained on biased datasets can reproduce and amplify societal inequalities. The constitutional guarantees of equal protection under Article 14 and non-discrimination under Article 15, read with statutory schemes, demand legal instruments that can both detect and remedy such disparate impacts. Algorithmic impact assessments, regular internal and independent audits, and representative-dataset obligations can give these equality provisions practical effect.
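
To make the audit obligation concrete, the sketch below computes per-group selection rates and the disparate impact ratio, using the four-fifths (80%) screening heuristic common in fairness auditing. It is an illustration only: the data, group labels, and the 0.8 threshold are hypothetical, and the heuristic is a screening signal, not a legal test under Articles 14 and 15.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group rate of favourable outcomes.

    decisions: iterable of (group, outcome) pairs, outcome in {0, 1}.
    """
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.

    A ratio below 0.8 (the 'four-fifths rule') is a common screening
    signal that the system deserves closer scrutiny.
    """
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical loan-approval outcomes keyed by a protected attribute.
sample = ([("A", 1)] * 80 + [("A", 0)] * 20 +
          [("B", 1)] * 55 + [("B", 0)] * 45)
print(disparate_impact_ratios(sample, reference_group="A"))
# {'A': 1.0, 'B': 0.6875} -> group B falls below the 0.8 screen
```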

  4. Data Protection and Privacy

AI, given its need for vast and varied datasets, sits at the intersection of the DPDP Act and other obligations.[5] The major open questions include: what constitutes sufficient anonymization for model training; how the lawful reuse of datasets for research and model improvement can be reconciled with data principals’ rights; and how cross-border transfers of datasets used to train large models can be authorized while preserving regulatory access and enforcement jurisdiction.

  5. Market Concentration and Competition

Leading foundation models and platforms are largely the winners of network effects and economies of scale in data and computing power. Competition law needs revamping to address data-driven dominance: remedies might include opening foundation models to local innovators, data portability mandates, and rigorous scrutiny of mergers that absorb or eliminate nascent competitors, to prevent foreclosure of emerging competition.[6]

  6. Safety, Standards, and “High-Risk” Uses

Certain AI applications, such as self-driving vehicles, coordination of critical infrastructure, predictive policing, and clinical decision aids, can create safety risks or cause rights violations at scale. Mitigating these risks calls for a risk-based regulatory strategy that specifically identifies high-risk activities and subjects them to pre-deployment certification, ongoing supervision, and mandatory incident reporting, while leaving low-risk activities free of unnecessary restrictions.

IV. Comparative Regulatory Approaches: Lessons for India

The EU’s risk-based AI Act imposes requirements on high-risk systems, prohibits certain uses outright, and sets strict compliance rules.[7] Despite its comprehensiveness, it is administratively demanding. The United Kingdom, by contrast, prefers a sector-based coordination model supported by non-binding, innovation-friendly codes. The United States relies on sectoral regulators and private governance initiatives. For India, the best course is a hybrid: a central statutory backbone that lays down principles and definitions and permits delegated rulemaking on technical standards; enforcement entrusted to sectoral regulators; and sandboxes with gradual rollout to build capacity and test interventions.

V. Calibrated, Risk-Based Regulatory Architecture for India

Principles

  • Human centricity: Keep humans in control and respect human dignity.
  • Rights protection: Emphasize privacy, non-discrimination, and procedural fairness.
  • Proportionality: Impose more onerous requirements on higher-risk AI systems.
  • Innovation and inclusion: Provide lighter requirements for startups and for public-good uses of AI.
  • Transparency and contestability: Support decisions with audit trails, complaint handling, and public scrutiny.

Institutional Design

  1. National AI Commission (statutory): A body that coordinates sectoral regulators, defines minimum technical standards, maintains a register of high-risk AI, issues binding delegated regulations on technical safety and transparency, and publishes model guidance and sectoral checklists.
  2. Sectoral Regulators: Retain enforcement powers over AI applications in their domains (RBI for financial AI, TRAI for telecom, health regulators for clinical AI), applying the baseline standards set by the Commission.
  3. Regulatory Sandboxes and a Public Interest Model Repository: Controlled mechanisms for innovation, with monitoring and exit provisions; a freely available repository of public-benefit models and representative datasets to reduce dependence on global proprietary models and encourage local innovation.

Regulatory Instruments

  • High-Risk List and AIAs: The Commission should publish notified high-risk categories that trigger mandatory AIAs, third-party audits, and certification. AIAs should document the system’s purpose, dataset sources, fairness and bias risk evaluation, and mitigation measures.
  • Layered Liability: Primary accountability for deployers; joint responsibility for developers where negligence in model design or training-data curation foreseeably causes harm; legal presumptions easing victims’ evidentiary burdens in clearly defined cases; compulsory liability insurance or pooled compensation funds for catastrophic harms.
  • Transparency Duties: Notice when significant decisions are automated; intelligible explanations of the decision rationale; avenues for human review and appeal.
  • Data Governance Adjustments: Define how the framework interfaces with the DPDP Act, including permitted legal grounds for model training with strong anonymization and governance safeguards, regulated cross-border transfer routes, and retention and deletion standards for model training datasets.[8]
  • Certification and Conformity Assessment: Create accredited conformity assessment bodies to certify high-risk systems; require model cards and datasheets for provenance and auditability (a minimal model-card sketch follows this list).
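
The model-card duty in the final bullet can be made concrete as a structured, machine-readable record. The sketch below is one possible shape, not a notified standard; all field names, the example system, and the certification workflow are hypothetical.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal machine-readable model card for provenance and audit.

    Field names are illustrative; a notified standard from the proposed
    National AI Commission would define the authoritative schema.
    """
    system_name: str
    version: str
    deployer: str
    developer: str
    intended_use: str
    high_risk_category: str          # as notified by the Commission
    training_data_sources: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    fairness_evaluations: list = field(default_factory=list)
    certification_id: str = "pending"

card = ModelCard(
    system_name="credit-scoring-assist",
    version="2.1.0",
    deployer="Example Bank Ltd.",
    developer="Example AI Labs",
    intended_use="Assist human officers in retail credit decisions",
    high_risk_category="economic interests of an individual",
    training_data_sources=["internal loan book 2018-2024 (anonymized)"],
    known_limitations=["sparse data for first-time borrowers"],
    fairness_evaluations=["disparate impact screen, 2025-Q4 audit"],
)
print(json.dumps(asdict(card), indent=2))
```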

Enforcement and Capacity Building

Stage the rollout, concentrating first on high-risk sectors. Fund interdisciplinary technical units within regulators, public-private partnerships for standard setting, and algorithmic-literacy training for the judiciary and investigators. Also encourage accredited third-party auditors and independent testing labs.[9]

VI. Drafting Roadmap and Illustrative Provisions

A modular statute would be preferable:

Part I—definitions and principles;

Part II—National AI Commission;

Part III—risk-based obligations;

Part IV—liability, remedies, and enforcement;

Part V—transitional and savings clauses consistent with DPDP Act obligations.

Illustrative definition clause: “High-risk AI system” means an AI system used to make, or materially assist in making, decisions that affect the life, liberty, health, or economic interests of an individual; a system performing public-security functions; or any other category notified by the National AI Commission.[10]

Illustrative liability clause:

“Where a high-risk AI system in operation causes loss or damage, the deployer shall be primarily liable to the person suffering such loss, subject to the deployer’s right to seek contribution from the developer where the loss is attributable to defects in model design or training data, or to a failure to adhere to applicable certification standards.”[11]

A rebuttable presumption of causal linkage shall apply in favor of the claimant in notified cases involving automated determinations that cannot be explained.

An Algorithmic Impact Assessment Template

An AIA should document, at a minimum, the following (a machine-readable sketch follows the list):

  • Description of the system and its functions.
  • Data sources and the full data-processing cycle.
  • Stakeholder mapping and affected rights.
  • Bias and disparate-impact identification and assessment.
  • Safety and robustness assessment.
  • Human intervention and appeal rights.
  • Audit logs, records, and conformity declarations.
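
Filed as a structured record, the template above lends itself to automated completeness checks by a registry or auditor. A minimal sketch follows; the section keys mirror the template, while the filing format and the example contents are hypothetical.

```python
# Minimal sketch of an AIA filing as a structured record, with a
# completeness check a registry or auditor could run automatically.
# Section keys mirror the template above; the format is hypothetical.

REQUIRED_SECTIONS = (
    "system_description",
    "data_sources_and_processing",
    "stakeholder_mapping",
    "bias_and_disparate_impact",
    "safety_and_robustness",
    "human_intervention_and_appeal",
    "audit_logs_and_conformity",
)

def validate_aia(filing: dict) -> list:
    """Return the list of missing or empty sections in an AIA filing."""
    return [s for s in REQUIRED_SECTIONS if not filing.get(s)]

draft_aia = {
    "system_description": "Welfare-eligibility triage assistant",
    "data_sources_and_processing": "State beneficiary registry; monthly refresh",
    "stakeholder_mapping": "Applicants, caseworkers, appellate officers",
    "bias_and_disparate_impact": "",   # not yet completed
    "safety_and_robustness": "Fallback to manual review on low confidence",
    "human_intervention_and_appeal": "Caseworker sign-off; 30-day appeal window",
    "audit_logs_and_conformity": "Append-only decision log, retained 7 years",
}

print("Incomplete sections:", validate_aia(draft_aia))
# Incomplete sections: ['bias_and_disparate_impact']
```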

VII. Practical Implications and Compliance Checklist

For Deployers and Developers

  • Keep comprehensive model records (training-data provenance, hyperparameters, and update history).
  • Perform and publish Algorithmic Impact Assessments for in-scope systems.
  • Apply differential privacy, strong anonymization, and data minimization techniques (see the sketch after this list).
  • Obtain liability insurance or participate in a pooled compensation fund.
  • Provide intelligible notices and a human-review process for affected persons.
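
Of the techniques in the third bullet, differential privacy is the most algorithmically precise; the sketch below illustrates the classic Laplace mechanism for releasing a private count. It assumes a simple counting query (sensitivity 1) and a hypothetical privacy budget epsilon; a production deployer would use a vetted library rather than hand-rolled noise.

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count satisfying epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical training-dataset statistic released under a privacy budget.
ages = [23, 31, 45, 52, 29, 61, 38, 27, 44, 36]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
print(f"Noisy count of records with age >= 40: {noisy:.2f}")
```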

For Regulators

  • Create interdisciplinary technical units and accredit independent auditors.
  • Publish sectoral checklists and guidance harmonized with National AI Commission standards.
  • Run sandboxes for high-impact public-interest innovations, with waiver and monitoring conditions.

For Courts

  • Use neutral technical experts and create procedures for expedited discovery of relevant model documentation and logs.
  • Fashion remedies including injunctions, corrective audits, and damages proportionate to the loss.

For Civil Society and Academia

  • Advocate for public-benefit datasets and independent oversight.
  • Support strategic litigation that clarifies standards and accountability norms.[12]
  • Build local testing infrastructure and algorithmic-fairness research suited to India’s socio-cultural diversity.

VIII. Tradeoffs, Risks, and Implementation Challenges

Every regulatory design involves tradeoffs. Overly detailed rules may stifle startups and entrench large incumbents; too little regulation may permit rights violations and other harms. Implementation challenges include regulators’ limited technical capacity, the need to harmonize standards, and the costs of coordination among authorities. A phased, learning-oriented approach, combined with strong public-private partnerships, transparent rulemaking, and close collaboration with civil society, offers the best opportunity for iterative improvement.

Conclusion and Recommendations

The challenge of AI governance in India is to build a legal system that protects freedom, controls risk, and promotes innovation that benefits everyone. The recommended path:

  1. Establish a National AI Commission by statute, with delegated powers to set technical standards and notify high-risk categories.
  2. Implement a risk-based framework with mandatory AIAs, certification of high-risk systems, and layered liability rules that secure access to remedy without unduly burdening innovators.
  3. Define clear DPDP Act interfaces for AI model training, anonymization, and cross-border transfers, preserving regulatory access and enforceability.
  4. Build regulatory capacity through technical units, accredited auditors, sandboxes, and judicial education.
  5. Support open public datasets and a public interest model repository to reduce reliance on proprietary global models and broaden participation in research.

By combining a principled statutory backbone with sectoral enforcement and capacity building, India can build a governance regime that will:

  1. Protect rights
  2. Control systemic risk
  3. Preserve room for the innovation essential to the country’s development.

Acknowledgements

This article draws conceptually on India’s evolving AI governance materials and comparative regulatory scholarship emerging through 2024–2025, including the Government of India’s AI governance guidance and contemporary practitioner commentary.[13]

Selected References and Citations

  1. Ministry of Electronics & Information Technology (MeitY), India AI Governance Guidelines (Final Version 2025).
  2. NewMind AI, India Country Report: AI Policy and Regulations of India (May 17, 2025) (Comprehensive Report).
  3. Advocate Tanwar, AI Regulation in India: Towards a Legal Framework for Artificial Intelligence (Nov. 11, 2025) (blog post).
  4. Digital Personal Data Protection Act, 2023 (India).
  5. European Parliament and Council, Proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act) (2021) (and subsequent versions).
  6. OECD, Recommendation of the Council on Artificial Intelligence (2019).
  7. Lina Khan, The Case for Platform Regulation, Yale Journal on Regulation (select articles on data and competition).
  8. Selected academic works on algorithmic accountability, transparency, and impact assessments (e.g., M. Crawford & T. Paglen; R. K. Bhardwaj & S. Roy on algorithmic fairness in India).

[1] OECD, Recommendation of the Council on Artificial Intelligence (2019).

[2] Ministry of Electronics & Information Technology (MeitY), India AI Governance Guidelines (Final Version 2025).

[3] Digital Personal Data Protection Act, 2023 (India).

[4] Ministry of Electronics & Information Technology (India), India AI Governance Guidelines (Final Version 2025).

[5] Digital Personal Data Protection Act, 2023 (India).

[6] Lina Khan, The Case for Platform Regulation, Yale Journal on Regulation (select articles on data and competition).

[7] European Parliament and Council, Proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act) (2021) (and subsequent versions).

[8] Digital Personal Data Protection Act, 2023 (India).

[9] European Parliament and Council, AI Act (2021) (and subsequent versions) (conformity assessment framework).

[10] Advocate Tanwar, AI Regulation in India: Towards a Legal Framework for Artificial Intelligence (Nov. 11, 2025) (blog post).

[11] Advocate Tanwar, AI Regulation in India: Towards a Legal Framework for Artificial Intelligence (Nov. 11, 2025) (blog post).

[12] M. Crawford & T. Paglen, Selected academic works on algorithmic accountability; R.K. Bhardwaj & S. Roy, Algorithmic Fairness in India.

[13] Ministry of Electronics & Information Technology (India), India AI Governance Guidelines (Final Version 2025); NewMind AI, India Country Report: AI Policy and Regulations of India (May 17, 2025); OECD, Recommendation of the Council on Artificial Intelligence (2019); M. Crawford & T. Paglen, Selected academic works on algorithmic accountability.
