Published on: 7th December 2025
Authored by: Archee Samaiya
KLE Society's Law College, Bangalore
Abstract
Artificial Intelligence (AI) has emerged as a transformative force across global economies, reshaping sectors such as healthcare, finance, education, and governance. Its rapid adoption in India underscores the need for a regulatory framework that balances innovation with ethical considerations and citizens’ rights. Despite its potential, AI presents significant legal and ethical challenges, including accountability, algorithmic bias, data privacy concerns, intellectual property complexities, and employment implications. India’s current approach relies on policy initiatives like NITI Aayog’s National Strategy for Artificial Intelligence (2018) and legislation such as the Digital Personal Data Protection Act, 2023, yet lacks a comprehensive statutory framework. This article examines the legal landscape surrounding AI in India, identifies key regulatory gaps, compares international regulatory models, and proposes reforms to establish a human-centric and legally robust AI governance framework.
Keywords: Artificial Intelligence, Regulation, Legal Challenges, Data Protection, AI Ethics, India, Policy, Innovation
Introduction
Artificial Intelligence (AI) represents one of the most significant technological revolutions of the 21st century. Its integration into everyday life is evident in applications ranging from chatbots and virtual assistants to autonomous vehicles and predictive policing. In India, a nation at the forefront of the digital economy, AI adoption is accelerating under initiatives like Digital India and the National AI Strategy launched by NITI Aayog. According to a 2024 NASSCOM report, India’s AI industry is projected to contribute over USD 500 billion to the national economy by 2025.[1]
While AI promises efficiency, economic growth, and enhanced service delivery, it simultaneously raises complex legal questions. The absence of a dedicated AI law in India complicates issues related to liability, algorithmic accountability, data governance, intellectual property, and ethical compliance. Policymakers must therefore balance technological advancement with constitutional obligations under Articles 14, 19, and 21, which guarantee equality, freedom of expression, and protection of life and personal liberty.[2]
This article explores the current legal and policy frameworks governing AI in India, examines key challenges and gaps, and draws insights from international regulatory practices. It concludes with actionable recommendations for a comprehensive, human-centric AI regulatory framework suitable for India’s socio-economic context.
Background of AI Regulation in India
India’s approach to AI governance is primarily policy-driven, with limited statutory backing. Recognizing AI’s transformative potential, the government has sought to promote its adoption while attempting to address ethical, legal, and social concerns.
2.1 NITI Aayog’s National Strategy for AI (2018)
In 2018, NITI Aayog released the National Strategy for Artificial Intelligence, titled “AI for All”, outlining India’s vision for AI development.[3] The strategy identifies five priority sectors: healthcare, agriculture, education, smart cities, and smart mobility. It emphasizes:
Ethical AI adoption, ensuring transparency and fairness.
Inclusivity, promoting access to AI benefits across socio-economic groups.
Skill development, creating talent pipelines for AI research and deployment.
Although strategic in scope, the policy is non-binding and lacks the enforcement mechanisms typically associated with statutory frameworks. It relies on collaboration with private sector stakeholders, academic institutions, and state governments to promote AI development.
2.2 Digital Personal Data Protection Act, 2023
The Digital Personal Data Protection (DPDP) Act, 2023 represents India’s first comprehensive legislative effort to regulate the collection, processing, and storage of personal data.[4] For AI systems, which rely heavily on large datasets, the DPDP Act imposes obligations including:
Consent-based processing, requiring explicit permission from data principals.
Purpose limitation, ensuring data is used only for defined objectives.
Transparency and accountability, mandating clear disclosures regarding data usage.
While the DPDP Act is crucial for safeguarding data privacy, it does not fully address algorithmic accountability, bias mitigation, or explainability, which are central to AI governance.
2.3 Information Technology Act, 2000 (as amended)
The IT Act and its subsequent amendments provide foundational cybersecurity and data protection measures. Notably:
Section 43A: Imposes liability on companies for negligent handling of sensitive personal data.[5]
Section 72A: Criminalizes the disclosure of personal data without consent.
Additionally, the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 mandate transparency and content moderation by digital intermediaries, touching upon AI systems used in online platforms.[6] However, these provisions are indirectly related to AI and were not designed to address autonomous decision-making or algorithmic biases.
2.4 Bureau of Indian Standards – AI Standardization Initiatives
The Bureau of Indian Standards (BIS), in collaboration with the Ministry of Electronics and Information Technology (MeitY), has initiated work on AI standards to ensure interoperability, safety, and reliability.[7] While still in progress, these standards are expected to guide ethical AI development and provide a basis for future statutory regulation.
2.5 Judicial Recognition of Data and Algorithmic Rights
Indian courts have begun laying the groundwork for AI governance through landmark rulings:
Justice K.S. Puttaswamy v. Union of India (2017): Recognized privacy as a fundamental right, establishing the principle of informational self-determination.[8]
Shreya Singhal v. Union of India (2015): Emphasized freedom of speech in the digital domain, relevant to AI-driven content moderation.[9]
These judicial pronouncements underscore the constitutional dimensions of AI governance, particularly privacy, equality, and freedom of expression, which must inform future legislation.
Overview of AI Governance in India
India’s AI governance ecosystem is currently characterized by a policy-first approach rather than comprehensive statutory regulation. While initiatives such as NITI Aayog’s National AI Strategy and the DPDP Act provide foundational guidance, there is no singular, enforceable AI legislation. Instead, India relies on a combination of sector-specific laws, soft law measures, and standardization efforts to regulate AI applications. This fragmented framework has advantages in flexibility but creates challenges in accountability, consistency, and enforcement.
3.1 Sectoral Policy Frameworks
3.1.1 AI in Healthcare
Healthcare is among the most regulated sectors using AI in India. AI applications include diagnostic tools, predictive health analytics, and telemedicine platforms.
National Digital Health Mission (NDHM): Under NDHM, AI-driven tools are integrated into electronic health records and predictive disease management systems.[10]
Regulatory Oversight: The Ministry of Health and Family Welfare, in coordination with the Central Drugs Standard Control Organization (CDSCO), regulates AI-enabled medical devices. Any AI system used for diagnosis or treatment must comply with medical device standards and clinical validation protocols.[11]
Legal Implication: Misdiagnosis caused by AI tools raises questions of liability—whether it lies with software developers, healthcare providers, or device manufacturers.
3.1.2 AI in Finance
AI adoption in the financial sector, including credit scoring, fraud detection, and automated trading, is governed by sector-specific guidelines:
Reserve Bank of India (RBI) Guidelines: The RBI mandates transparency, auditability, and accountability for AI algorithms used in credit scoring and loan disbursal.[12]
Securities and Exchange Board of India (SEBI) Regulations: SEBI requires AI-driven trading platforms to implement safeguards to prevent market manipulation and systemic risk.[13]
While these sectoral rules promote responsible AI use, they are reactive and confined to specific industries, leaving broader societal implications unaddressed.
3.1.3 AI in Governance and Smart Cities
AI is being deployed in Indian governance for traffic management, predictive policing, and public service delivery:
Smart Cities Mission: AI tools assist in urban planning, traffic optimization, and waste management.[14]
Predictive Policing Initiatives: Certain states have piloted AI-driven tools to identify crime hotspots. However, concerns over privacy infringement, bias, and transparency persist.[15]
The absence of clear legal mandates for AI in governance increases the risk of abuse and limits public trust in these technologies.
3.2 Regulatory Guidelines and Initiatives
3.2.1 MeitY AI Guidelines
The Ministry of Electronics and Information Technology (MeitY) has released guidelines for AI adoption in public sector projects, emphasizing:
Data privacy compliance.
Algorithmic transparency.
Ethical use of AI for societal benefit.[16]
These guidelines are advisory and not legally enforceable, but they serve as a roadmap for public institutions and private partners engaging in AI projects.
3.2.2 Data Protection as a Regulatory Tool
The DPDP Act, 2023 forms the core regulatory instrument for AI-related data practices:
Consent and Purpose Limitation: AI systems must operate on legally obtained data and respect the declared purpose.[17]
Data Minimization: Algorithms should only process data essential for their function, limiting risks of misuse.
Accountability: Entities using AI for automated decision-making must maintain audit trails and document compliance.
Although significant, the DPDP Act does not fully cover algorithmic transparency, explainability, or bias mitigation, which are central to AI governance.[18]
3.3 Judicial Oversight and Emerging Legal Principles
Indian courts have laid the foundation for AI regulation indirectly through privacy and digital rights jurisprudence:
Justice K.S. Puttaswamy v. Union of India (2017): Establishes the right to informational self-determination, mandating that individuals have control over personal data.[19]
Shreya Singhal v. Union of India (2015): Affirms freedom of speech in the digital realm, relevant for AI-driven content moderation and automated decision-making.[20]
Judicial pronouncements indicate that India is moving toward constitutionalized AI governance, where AI applications must align with fundamental rights principles.
3.4 Standardization Efforts
Bureau of Indian Standards (BIS)
BIS is working on AI standardization to ensure safety, interoperability, and reliability of AI systems:
Draft standards focus on ethics, transparency, accountability, and robustness of AI algorithms.[21]
These standards will provide a reference framework for public and private AI deployment.
While promising, these efforts are non-binding, highlighting the need for statutory enforcement mechanisms.
Sectoral AI Compliance Committees
Several ministries, including MeitY and the Department of Science & Technology (DST), are creating committees to review AI projects in critical sectors (e.g., finance, healthcare, transport) to ensure ethical compliance and alignment with policy frameworks.[22]
3.5 Challenges with India’s Current Regulatory Framework
Despite multiple initiatives, the current framework faces significant limitations:
Fragmentation: Policies are dispersed across sectors, leading to inconsistent enforcement and accountability.
Non-binding Guidelines: Many directives are advisory, lacking statutory force.
Limited Liability Provisions: There is no clear legal mechanism to assign responsibility for AI-driven errors.
Insufficient Coverage of Ethical Dimensions: Bias, fairness, and explainability are not adequately regulated.
Limited International Alignment: India lacks harmonization with global AI regulations like the EU AI Act, which may impede international collaboration and standardization.
3.6 Recent Government Initiatives for AI Governance
To bridge these gaps, India has undertaken several initiatives:
Responsible AI for Youth Program: Training students and professionals on AI ethics and governance.[23]
Centre for Responsible AI Research: Aims to develop ethical AI models, standards, and compliance tools.
AI Regulatory Sandbox Proposal: MeitY is exploring the creation of sandbox environments for testing AI systems under supervision before full-scale deployment.[24]
These steps indicate that India is gradually moving toward a regulated and ethical AI ecosystem, though comprehensive legislation remains absent.
Legal Challenges in AI Regulation in India
Artificial Intelligence, while revolutionary, introduces complex legal and ethical dilemmas. India’s current regulatory framework partially addresses these concerns but leaves significant gaps, particularly around liability, bias, data protection, intellectual property, and employment implications.
4.1 Accountability and Liability of AI Systems
AI systems can operate autonomously, making decisions without human intervention. This raises questions about who bears legal responsibility when an AI system causes harm. Indian law currently recognizes liability only for natural and juristic persons, leaving AI systems unaccountable.
For instance, in the context of autonomous vehicles, if an accident occurs due to a self-driving car, liability could involve:
The manufacturer of the vehicle,
The software developer, or
The user or owner of the system.
Without clear statutory guidance, victims may face hurdles in seeking remedies. Scholars suggest adopting a tiered liability framework, where responsibility is distributed according to control, foreseeability, and risk assessment.[25]
4.2 Algorithmic Bias and Discrimination
AI systems are prone to bias, often reflecting the prejudices present in the data used for training algorithms. Examples include facial recognition misidentifying individuals from certain demographic groups or credit scoring systems disadvantaging marginalized communities.
Under the Indian Constitution, such outcomes may violate Article 14 (Equality before law) and Article 15 (Non-discrimination). However, no explicit legal provisions currently regulate algorithmic fairness or mandate audits for bias. Ethical AI frameworks are still largely advisory, leaving enforcement uncertain.[26]
4.3 Data Privacy and Consent
AI applications rely on large datasets, often containing personal information. While the DPDP Act, 2023 establishes a framework for consent, purpose limitation, and data minimization, AI’s automated decision-making processes complicate compliance.
Profiling and predictive analytics may use aggregated or anonymized data, raising questions about indirect privacy infringements.
Explainability remains a concern: users must be able to understand AI-driven decisions affecting them.
Privacy jurisprudence, particularly Justice K.S. Puttaswamy v. Union of India (2017), provides constitutional support for informational self-determination, requiring regulators to align AI practices with these principles.[27]
4.4 Intellectual Property Challenges
AI-generated content challenges traditional copyright law, which is predicated on human authorship. Questions include:
Who owns AI-generated music, art, or literature?
Should programmers, users, or AI systems themselves hold rights?
India’s Copyright Act, 1957 currently does not recognize AI as an author. Addressing this gap may require legislative amendments to allow co-authorship or ownership frameworks for AI-generated works while preserving human oversight.[28]
4.5 Deepfakes and Cybersecurity Risks
AI-generated media, including deepfakes, present unique challenges:
Misinformation and defamation risk, particularly during elections or public campaigns.
National security threats, as synthetic media can be weaponized to manipulate opinion or spread propaganda.
Current provisions under Section 67 of the IT Act criminalize obscene digital content but do not specifically address deepfakes. Experts argue for dedicated legislation to regulate synthetic media responsibly.[29]
4.6 Employment and Labour Implications
Automation powered by AI threatens significant job displacement across sectors. Current labour and social security laws, including the Code on Social Security, 2020, do not specifically address technological unemployment. Policymakers must consider:
Retraining and upskilling programs for affected workers,
Compensation mechanisms for displaced employees, and
Inclusion of AI-specific clauses in labour legislation to ensure workers’ rights are protected.[30]
Comparative International Approaches to AI Regulation
India can draw insights from global AI regulatory practices, which illustrate varying approaches to balancing innovation with accountability.
5.1 European Union – The AI Act (2024)
The EU AI Act is the first comprehensive statutory framework for AI globally. Key features include:
Risk-based categorization: AI systems are classified as unacceptable risk, high risk, limited risk, or minimal risk.
High-risk AI obligations: Transparency, human oversight, bias mitigation, and detailed documentation.
Prohibitions: Certain applications, such as social scoring or manipulative AI targeting vulnerable populations, are banned.[31]
This risk-based approach provides a model for India to regulate AI proportionally, focusing resources on systems with the highest societal impact.
5.2 United States – Sectoral Regulation and Self-Regulation
The U.S. employs a sectoral approach, where AI regulation is fragmented across agencies:
Federal Trade Commission (FTC): Oversees AI use in consumer protection and prevents deceptive practices.
Blueprint for an AI Bill of Rights (2022): Outlines principles for safe and equitable AI, including data privacy, algorithmic transparency, and human oversight.[32]
While encouraging innovation, this decentralized approach may lead to inconsistencies and gaps in enforcement.
5.3 China – State-Centric AI Governance
China emphasizes state control and content alignment with national interests. Key aspects include:
Generative AI Measures (2023): AI outputs must comply with socialist values and national security requirements.
Algorithmic transparency: Companies are required to disclose AI logic and decision-making frameworks to regulators.[33]
Although effective for centralized control, this model raises concerns over freedom of expression and human rights compliance, which contrasts with India’s democratic ethos.
5.4 Global Lessons for India
Comparative analysis highlights:
1. Risk-based regulation (EU) is effective for focusing oversight where it matters most.
2. Principle-based guidance (US) encourages innovation but may be fragmented.
3. Mandatory disclosure and government oversight (China) ensure control but limit civil liberties.
India must adopt a hybrid model that combines:
1. Risk-based regulation for high-impact AI systems,
2. Ethical standards for all AI applications, and
3. Judicially anchored constitutional safeguards.
Constitutional and Ethical Dimensions of AI Governance
6.1 Constitutional Safeguards
India’s Constitution provides a strong foundation for AI ethics:
Equality and Non-Discrimination (Articles 14 & 15): AI algorithms must not perpetuate social, economic, or caste-based biases. Algorithmic fairness should be mandated to prevent discrimination in employment, credit, or law enforcement.
Freedom of Expression (Article 19): AI content moderation, including automated social media filtering, must respect free speech while balancing harms like misinformation or hate speech.
Right to Privacy (Article 21): Following Justice K.S. Puttaswamy v. Union of India (2017), individuals retain control over their personal data, requiring informed consent, data minimization, and transparency in AI-driven decisions.[34]
6.2 Ethical Principles for AI
Globally recognized AI ethics frameworks provide guidance for India:
Transparency: Users should understand AI decision-making processes. Black-box systems without explainability must be restricted in high-risk domains like healthcare or the judiciary.
Accountability: Human oversight and audit mechanisms are essential. Organizations deploying AI must be answerable for errors or harms caused.
Safety and Robustness: AI systems must be rigorously tested to prevent failures, cyber vulnerabilities, or harmful outcomes.
Human-Centric Design: AI should augment human capabilities rather than replace human judgment, especially in sectors impacting life and liberty.
These principles ensure AI aligns with constitutional morality and broader social welfare.
Proposed Legal and Policy Reforms
To address current gaps, India requires a comprehensive legal framework encompassing statutory regulations, sectoral guidance, and ethical standards. Key recommendations include:
7.1 Dedicated AI Legislation
India should enact an Artificial Intelligence Regulation and Ethics Act, covering:
Definitions of AI systems and stakeholders.
Liability and accountability frameworks for autonomous systems.
Mandatory transparency and explainability standards.
Mechanisms for algorithmic audits and impact assessments.
Such legislation would provide legal certainty, encourage innovation, and protect citizens’ rights.
7.2 Establishment of an AI Regulatory Authority
A centralized AI Regulatory Authority of India (AIRA) could oversee:
High-risk AI systems in healthcare, finance, governance, and public safety.
Compliance with ethical, legal, and safety standards.
Licensing and certification of AI applications prior to deployment.
The Authority could act as both a regulator and an advisory body, bridging policy and judicial oversight.
7.3 Algorithmic Impact Assessment (AIA)
Before deploying AI in sensitive areas, organizations should conduct AIAs to evaluate:
Potential discrimination or bias.
Risks to privacy and fundamental rights.
Safety and reliability of algorithms.
This concept mirrors environmental impact assessments (EIA) and ensures proactive mitigation of AI harms.[35]
7.4 Ethical Standards and Certification
Development of AI ethical standards in collaboration with BIS and MeitY.
Voluntary AI certification programs to encourage adherence to global best practices.
Special focus on explainability, fairness, and human oversight.
These measures will build public trust and ensure ethical deployment of AI technologies.
7.5 Public Awareness and Digital Literacy
Citizens must understand:
Their rights regarding AI-driven decisions.
Risks associated with data misuse.
Mechanisms for redress in case of harm.
Educational campaigns and digital literacy initiatives will empower informed participation in AI governance.
7.6 International Cooperation
India should actively engage with international bodies such as OECD, G20, and UN to:
Harmonize AI standards.
Facilitate global data exchange with privacy safeguards.
Ensure Indian AI innovations meet international compliance requirements.
This will help India maintain global competitiveness while adhering to ethical norms.
Conclusion
Artificial Intelligence is no longer a futuristic concept—it is actively shaping governance, economy, and society. For India, the challenge lies in fostering innovation while safeguarding constitutional rights and ethical norms.
India’s current policy-driven approach, including the DPDP Act, 2023 and NITI Aayog’s AI strategy, provides a strong foundation. However, gaps remain in liability frameworks, algorithmic fairness, IP rights, and sectoral compliance. Drawing from international experiences, India must adopt a hybrid regulatory model:
Risk-based regulation for high-impact AI systems.
Ethical standards and certification to ensure fairness, transparency, and accountability.
Constitutionally anchored oversight to protect fundamental rights.
Ultimately, AI regulation in India must be human-centric, promoting innovation while preserving dignity, equality, and justice. By implementing robust laws, ethical frameworks, and public awareness initiatives, India can become a global leader in responsible AI, balancing technological growth with social welfare.
REFERENCES
[1] NASSCOM, The AI Landscape in India: Growth Projections 2024–2025 (2024), available at https://www.nasscom.in.
[2] Constitution of India, Arts. 14, 19, 21.
[3] NITI Aayog, National Strategy for Artificial Intelligence: #AIforAll (2018), https://www.niti.gov.in.
[4] Digital Personal Data Protection Act, No. 25 of 2023, India.
[5] Information Technology Act, No. 21 of 2000, §§ 43A, 72A.
[6] Ministry of Electronics & IT, IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
[7] Bureau of Indian Standards, AI Standardization Initiative Report (2023), https://www.bis.gov.in.
[8] Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
[9] Shreya Singhal v. Union of India, (2015) 5 SCC 1 (India).
[10] National Digital Health Mission, Framework for AI in Healthcare (2023), https://ndhm.gov.in.
[11] Central Drugs Standard Control Organization, Regulation of AI-based Medical Devices in India (2022), https://cdsco.gov.in.
[12] Reserve Bank of India, Guidelines for AI in Banking and Finance (2022), https://rbi.org.in.
[13] Securities and Exchange Board of India, AI in Capital Markets: Regulatory Framework (2023), https://sebi.gov.in.
[14] Ministry of Housing and Urban Affairs, Smart Cities Mission: AI Applications (2021), https://smartcities.gov.in.
[15] NITI Aayog, AI for Governance: Opportunities and Risks (2020), https://niti.gov.in.
[16] Ministry of Electronics and IT, MeitY AI Guidelines for Public Sector Projects (2022), https://meity.gov.in.
[17] Digital Personal Data Protection Act, No. 25 of 2023, India.
[18] Ibid.
[19] Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
[20] Shreya Singhal v. Union of India, (2015) 5 SCC 1 (India).
[21] Bureau of Indian Standards, Draft AI Standards (2023), https://bis.gov.in.
[22] Ministry of Science & Technology, AI Compliance Committees: Annual Report (2023), https://dst.gov.in.
[23] NITI Aayog, Responsible AI for Youth Program (2022), https://niti.gov.in.
[24] MeitY, AI Regulatory Sandbox Proposal (2023), https://meity.gov.in.
[25] V. Srivastava, Liability in Autonomous AI Systems: An Indian Perspective, 12 Indian J. L. & Tech. 45 (2023).
[26] R. Mehta, Algorithmic Bias and Indian Constitutional Law, 15 NLU J. L. & Ethics 89 (2022).
[27] Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
[28] Copyright Act, 1957, No. 14 of 1957, India.
[29] Information Technology Act, No. 21 of 2000, § 67; S. Kumar, Regulating Deepfakes in India, 7 J. Cyber Law 23 (2021).
[30] Code on Social Security, 2020, No. 36 of 2020, India.
[31] European Commission, Proposal for an AI Act (2024), https://eur-lex.europa.eu.
[32] White House, Blueprint for an AI Bill of Rights (2022), https://www.whitehouse.gov.
[33] State Council of China, Generative AI Measures (2023), http://www.gov.cn.
[34] Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
[35] S. Iyer, Algorithmic Impact Assessments in India: Legal and Ethical Considerations, 11 Indian J. L. & Tech. 101 (2022).