Published on 5th June 2025
Authored By: Agalya Ajith
Tamil Nadu Dr. Ambedkar Law University
Introduction
The rapid advancement of Artificial Intelligence (AI) and Machine Learning (ML) has transformed the technological landscape, creating new paradigms across sectors. These technologies offer numerous benefits but also raise profound legal, ethical, and regulatory concerns that current legal systems are ill-equipped to address. As AI systems become increasingly autonomous and are deployed in decision-making processes affecting individuals’ rights, the absence of comprehensive legal frameworks becomes critical. Questions surrounding accountability, data protection, intellectual property, and algorithmic transparency remain unresolved across most jurisdictions.
This article examines the legal challenges posed by AI and ML, with a focus on comparative perspectives from India, the European Union, and the United States. It identifies gaps in existing regulatory approaches and proposes a framework for responsible innovation, striking a balance between technological progress and fundamental rights to contribute to a coherent legal framework for the era of intelligent machines.
Legal Definition and Classification of AI/ML
Defining and classifying Artificial Intelligence (AI) and Machine Learning (ML) remains a significant legal challenge due to their evolving nature and diverse applications. Currently, no universally accepted legal definition exists, leading to interpretative inconsistencies across jurisdictions.
The European Union’s AI Act, first proposed in 2021 and adopted in 2024, defines an AI system broadly as a machine-based system that infers from its inputs how to generate outputs such as predictions, recommendations, or decisions. This inclusive approach, while practical, raises concerns about enforceability and overregulation. In the United States, AI is typically addressed through sector-specific laws and is understood more by its functions, such as automation or pattern recognition, than by a unified legal standard.
In India, regulatory efforts are still nascent. NITI Aayog’s 2018 National Strategy for Artificial Intelligence outlines a vision for ethical AI deployment in sectors like healthcare and education, yet India lacks a formal legal definition or binding classification framework. This absence creates challenges for oversight, enforcement, and harmonisation with global standards.
A key aspect of classification lies in the distinction between Narrow AI (task-specific systems like chatbots) and General AI (hypothetical systems with human-like cognition). Most legal frameworks today focus on regulating Narrow AI, though the potential emergence of General AI calls for anticipatory regulatory strategies.
Another challenge is assessing AI systems based on risk, autonomy, and decision-making capability. The European Union’s risk-based model under its AI Act is a step forward, categorising AI systems into tiers of regulatory scrutiny. This approach could guide other jurisdictions, including India, in developing tailored and adaptive legal definitions that reflect technological realities.
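To make the tiered model concrete, the following minimal Python sketch maps the Act’s four risk tiers to illustrative systems and headline obligations. This is an editorial simplification for exposition, not the text of the Regulation; the example systems and obligation summaries are our shorthand.

```python
# A simplified, illustrative mapping of the EU AI Act's four risk tiers.
# The examples and obligation summaries are editorial shorthand, not
# statutory language.
RISK_TIERS = {
    "unacceptable": {"example": "social scoring by public authorities",
                     "obligations": "prohibited outright"},
    "high":         {"example": "credit scoring, medical diagnostics",
                     "obligations": "conformity assessment, human oversight, logging"},
    "limited":      {"example": "chatbots",
                     "obligations": "transparency (users must know they face an AI)"},
    "minimal":      {"example": "spam filters, game AI",
                     "obligations": "no mandatory obligations; voluntary codes"},
}

def obligations_for(tier: str) -> str:
    """Return the headline obligations for a given risk tier."""
    return RISK_TIERS[tier]["obligations"]

print(obligations_for("high"))
# -> conformity assessment, human oversight, logging
```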
Data Protection and Privacy Concerns
A pressing legal issue surrounding AI and ML is the protection of personal data and individual privacy. AI systems rely on vast amounts of data, often including sensitive personal information. As AI applications expand, concerns about data privacy and misuse have become urgent. The use of AI for data-driven decision-making raises questions about data ownership, collection, and usage.
In the European Union, the General Data Protection Regulation (GDPR) offers individuals greater control over their personal data. AI systems processing personal data must adhere to principles like data minimisation, purpose limitation, and informed consent, granting individuals rights to access, rectify, and erase their data. However, concerns remain about AI’s potentially opaque data processing and its impact on consumer protection.
In India, the Personal Data Protection Bill, 2019 (PDPB), modelled on the GDPR, emphasised consent-based data collection and individual control over data before it was withdrawn in 2022 and succeeded by the Digital Personal Data Protection Act, 2023. Critics argued that it did not adequately address AI-specific challenges such as automated decision-making and profiling: AI systems making significant decisions (e.g., in credit scoring) often lack transparency, potentially violating privacy rights.
AI systems also risk re-identifying anonymised data by correlating datasets with publicly available information, a particular danger where sensitive data is involved. The role of informed consent in AI-driven data processing is further complicated by the sheer volume of data and the complexity of algorithms, which can undermine the effectiveness of consent.
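The re-identification risk is easy to demonstrate. The sketch below, using entirely hypothetical data and column names, shows how a dataset stripped of names can be re-linked to a public register by joining on shared quasi-identifiers such as postcode, birth year, and sex:

```python
# Illustrative linkage attack: joining an "anonymised" dataset to a
# public register on quasi-identifiers. All data here is hypothetical.
import pandas as pd

# Health records with direct identifiers removed
health = pd.DataFrame({
    "postcode": ["600001", "600002"],
    "birth_year": [1984, 1991],
    "sex": ["F", "M"],
    "diagnosis": ["diabetes", "asthma"],
})

# A public register that still carries names
register = pd.DataFrame({
    "name": ["A. Kumar", "B. Devi"],
    "postcode": ["600002", "600001"],
    "birth_year": [1991, 1984],
    "sex": ["M", "F"],
})

# Matching on the shared quasi-identifiers re-attaches names to diagnoses
reidentified = health.merge(register, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```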
For ethical and lawful AI, governments and corporations must develop clear, enforceable data privacy policies, including mandatory data protection impact assessments (DPIAs) for high-risk AI applications. AI systems should be designed with privacy by default and by design, mitigating privacy risks at the development stage.
Liability and Accountability in AI Systems
As AI systems gain greater autonomy, determining who should be held legally accountable for their actions becomes increasingly challenging. Traditional legal frameworks are based on human agency. AI blurs this by introducing a non-human agent, raising questions about responsibility when AI causes harm.
One major legal challenge is the lack of a consistent framework for AI liability. For example, in autonomous vehicle accidents, fault may depend on software glitches, design flaws, or human override failure. Existing product liability laws may not adequately address such scenarios, especially with the “black box” nature of some machine learning models.
A real-world example is the 2018 death of a pedestrian in Arizona involving an Uber self-driving car. Although a human driver was present, the vehicle was in autonomous mode. While the driver was charged, Uber was not held criminally liable.[1] This raised serious concerns about the extent to which companies deploying autonomous systems can evade responsibility under current legal structures. Similarly, Tesla Inc. has faced multiple lawsuits following fatal crashes involving its “Autopilot” feature. In these cases, courts have had to consider whether the manufacturer should be liable for system failures or whether drivers bore ultimate responsibility for not maintaining control.[2]
Proposed models include strict liability, holding manufacturers/operators liable regardless of fault, and risk-based liability, tailoring accountability to the AI system’s potential risk. High-risk applications would face stricter obligations. Another concept is “electronic personhood” for highly autonomous AI, creating a legal identity for accountability, though this remains controversial.
The European Parliament has recommended a comprehensive AI liability regime, including mandatory insurance for high-risk AI and obligations for transparency and human oversight. In India, no specific AI liability framework exists. Civil liability would likely be pursued under tort or consumer protection laws, which may not address causation or autonomous machine behaviour. Concerns have also been raised regarding the use of facial recognition technology by law enforcement agencies, such as the Delhi Police, without clear statutory guidelines.[3] While no major case law has emerged on AI liability in India, these developments underscore the urgent need for legal clarity on accountability in AI deployment.
A robust legal framework must clarify accountability across the AI lifecycle and ensure effective remedies for harm, requiring legislative reform and interdisciplinary collaboration.
Intellectual Property Rights and AI
AI is reshaping the intellectual property (IP) landscape, challenging frameworks designed for human creators. As AI systems create content and inventions without direct human input, the question of ownership arises.
- Authorship and Ownership
Traditionally, IP law, whether under copyright or patent regimes, attributes authorship or inventorship to a natural person. However, when an AI system independently generates content, such as a digital artwork or musical composition, determining authorship becomes problematic.
Studio Ghibli’s co-founder Hayao Miyazaki has famously condemned AI-generated animation as “an insult to life itself”, and the spread of AI-generated works imitating the studio’s characters and distinct visual style raises questions about infringement and the need for stronger creator protections.[4]
In India, the Copyright Act 1957 defines an “author” as the person who causes the creation of the work.[5] It does not contemplate non-human creators, and courts have not yet addressed whether works generated autonomously by AI can enjoy copyright protection. Internationally, jurisdictions vary in their stance. For instance, the UK Copyright, Designs and Patents Act 1988 allows copyright in computer-generated works to be held by the person “by whom the arrangements necessary for the creation of the work are undertaken.”[6] However, this still assumes a human intermediary is responsible.
The US Copyright Office, in recent decisions (e.g., Zarya of the Dawn, decided in 2023), has refused copyright protection for AI-generated images absent meaningful human authorship, while permitting registration of the human-authored elements of a work.[7] This highlights the global inconsistency in dealing with machine-generated works and underscores the need for harmonised legal standards.
- Patent Law and Inventive AI
The issue of AI inventorship in patent law is equally contentious. The DABUS cases sought recognition of an AI system as the inventor.[8] While the EPO, UKIPO, and USPTO rejected these applications, the South African Patent Office granted a patent listing an AI as the inventor. This divergence highlights the need for legislative clarity. Granting patents to AI may incentivize innovation, but raises concerns about enforceability and the definition of inventive step by a non-human agent.
- Need for Reform
The growing use of AI in creative and inventive processes necessitates reform in IP laws. A possible solution is attributing rights to the developer, deployer, or user based on their control or contribution. Some experts argue for a sui generis regime for AI-generated content. As AI blurs the line between creator and tool, the legal system must evolve to ensure fair recognition, prevent misuse (e.g., deepfakes), and support innovation.
Privacy and Data Protection
Artificial Intelligence and Machine Learning systems are heavily reliant on large datasets, often containing personal or sensitive information. This raises significant concerns regarding the collection, storage, processing, and sharing of data, especially when individuals are unaware of or unable to control how their data is used.
- AI and Intrusive Data Practices
AI systems, particularly those used in facial recognition, predictive policing, health diagnostics, and targeted advertising, can potentially infringe on the right to privacy, which is recognised as a fundamental right under Article 21 of the Indian Constitution.[9] Such systems frequently gather data without informed consent, and the use of automated profiling can lead to discrimination, surveillance, or misuse.
In India, the Digital Personal Data Protection Act, 2023 (DPDP Act), provides a framework for safeguarding personal data, introducing principles of consent and data minimisation.[10] However, it has been criticised for broad state exemptions and weak enforcement.
To safeguard privacy, AI and ML systems should be designed with stringent access control mechanisms that limit the use of data without proper permission. Several design principles can be implemented to ensure that these systems operate within the confines of data protection laws (a short code sketch after this list illustrates how some of them translate into practice):
- Access Control and Permissions: AI systems need strong user authentication (e.g., two-factor) and role-based access control (RBAC) to limit access to sensitive data, ensuring only authorised users can interact and preventing misuse.
- Granular Permissioning: AI must allow individuals to explicitly consent to specific data access and usage purposes, giving data subjects control over their information, especially in sensitive sectors like healthcare and finance.
- Data Minimisation: AI should be designed to collect only the data essential for its specific purpose, following the data protection principle of minimisation to reduce the risk of unnecessary exposure.
- Anonymisation and Pseudonymisation: AI systems can use anonymisation or pseudonymisation (removing or replacing personal identifiers) to reduce the risk of data misuse if compromised, while still enabling useful analysis.
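As a minimal sketch of how role-based access control and pseudonymisation might be combined in practice (the roles, record layout, and salt below are hypothetical and not drawn from any statute or standard):

```python
# Minimal sketch of role-based access control (RBAC) plus pseudonymisation.
# Roles, record fields, and the salt are hypothetical illustrations.
import hashlib

ROLE_PERMISSIONS = {
    "clinician": {"read_clinical"},
    "analyst": {"read_pseudonymised"},
}

def pseudonymise(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]

def fetch_record(role: str, record: dict, salt: str = "org-secret") -> dict:
    perms = ROLE_PERMISSIONS.get(role, set())
    if "read_clinical" in perms:
        return record  # full access for authorised clinical roles
    if "read_pseudonymised" in perms:
        # Data minimisation: strip direct identifiers before release
        return {"patient": pseudonymise(record["patient"], salt),
                "diagnosis": record["diagnosis"]}
    raise PermissionError(f"role '{role}' may not access this record")

record = {"patient": "A. Kumar", "diagnosis": "asthma"}
print(fetch_record("analyst", record))    # pseudonymised view
print(fetch_record("clinician", record))  # full view
# fetch_record("intern", record) would raise PermissionError
```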
- International Benchmarks
Globally, the GDPR serves as a gold standard in privacy protection, with stringent rules on data processing and, under Article 22, safeguards against solely automated decision-making, often described as a “right to explanation”.[11] Countries like Canada and the UK have adopted similar regulations. India’s evolving framework could learn from these models.
- Challenges in Enforcement
Assigning liability for data breaches or algorithmic bias is a primary legal challenge. It is often unclear who is accountable when AI violates privacy. Black box algorithms complicate establishing wrongful conduct. Function creep, where data collected for one purpose is repurposed without consent, is also a risk.
- Towards a Rights-Based Approach
To effectively address these concerns, a rights-based and transparent AI governance model is required. Measures such as algorithmic audits, independent data protection authorities, transparency reports, and ethical AI guidelines should be institutionalised. Public participation in policy-making and regular impact assessments are also essential to protect citizens’ rights in an AI-driven society.
Regulatory Framework and Global Approaches
The legal challenges of AI and ML have led to diverse regulatory responses. Some countries are developing AI-specific legislation, while others rely on sectoral guidelines or data protection laws. This fragmented approach highlights the need for comprehensive and harmonised frameworks.
- India’s Policy Landscape
India currently lacks a dedicated AI law. NITI Aayog’s 2018 #AIForAll discussion paper identifies five priority sectors (healthcare, agriculture, education, smart cities, and mobility) for AI implementation and emphasises the need for responsible AI, but it is non-binding.[12]
Other regulations like the DPDP Act, 2023, and sectoral guidelines cover data protection and algorithmic transparency to some extent, but fall short of a unified AI regulatory code. The Justice B.N. Srikrishna Committee Report (2018) also highlighted the importance of transparency in automated decision-making.[13]
- The European Union: A Rights-Focused Model
The EU’s AI Act adopts a risk-based approach, classifying AI systems into risk categories with corresponding obligations, including conformity assessments and human oversight for high-risk applications.[14] It complements existing EU laws such as the GDPR, aligning AI development with democratic values.
- Other International Models
Countries like Canada, Singapore, and Japan have adopted soft-law frameworks, such as ethical guidelines, voluntary standards, and sandboxes for AI innovation. The OECD Principles on AI (2019)—endorsed by over 40 countries—emphasise inclusive growth, transparency, robustness, and accountability.[15] The 2021 UNESCO Recommendation on the Ethics of AI serves as a global framework for guiding ethical AI governance, with particular relevance for developing nations.[16]
These models reflect a growing consensus that multilateral cooperation is essential to regulate cross-border applications of AI, such as autonomous weapons, digital surveillance, and international data transfers.[17]
- Need for a Unified Indian Approach
Given the pace of AI adoption in India, dedicated AI legislation is imperative to define AI, outline liability, ensure fairness and transparency, and establish regulatory oversight bodies. India can learn from the EU’s model while tailoring its laws to the country’s socio-economic realities.
Conclusion and Recommendations
AI and ML present immense opportunities and significant legal challenges concerning liability, IP, privacy, and data protection. Current regulatory frameworks, including India’s, offer partial solutions but lack the necessary coherence and flexibility.
- The Need for a Comprehensive Legal Framework
A robust legal framework is essential for the responsible development and deployment of AI and ML, addressing technical aspects while safeguarding human rights and ethics. It should balance innovation with protection against risks.
- Key Recommendations:
- Establish Clear Definitions and Classifications: Laws should clearly define AI systems based on autonomy and risk for targeted regulation.
- Develop AI-Specific Legislation: India should consider specific AI laws focusing on liability, accountability, and ethical use.[18]
- Promote Transparency and Accountability: AI systems should be transparent with clear documentation and oversight, especially in high-risk applications.
- Implement Independent Oversight Bodies: Establish regulatory bodies with technical expertise and enforcement powers.
- International Cooperation and Harmonisation: Collaborate internationally on AI regulation to prevent human rights infringements.[19]
- Public Participation and Ethical AI Guidelines: Ensure inclusive AI governance with input from diverse stakeholders and promote ethical guidelines.
- Towards a Balanced Future
The future of AI requires balancing technological progress with human well-being through a legal framework prioritising ethics, accountability, and transparency. Collaboration between governments, industry, and civil society is crucial for a future where AI benefits all.
References
[1] National Transportation Safety Board, Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018 (Highway Accident Report NTSB/HAR-19/03, 2019) https://www.ntsb.gov/investigations/AccidentReports/Reports/HAR1903.pdf accessed 15 April 2025.
[2] See e.g. Huang v Tesla Inc (Cal Super Ct, filed 2019); and other related cases filed in federal and state courts concerning “Autopilot” malfunctions.
[3] Internet Freedom Foundation, Project Panoptic: Tracking Facial Recognition Technology in India (2021) https://panoptic.in accessed 15 April 2025.
[4] Aja Romano, ‘Legendary animator Hayao Miyazaki reacts to AI-generated animation: “An insult to life itself”’ (Vox, 11 December 2016) https://www.vox.com/2016/12/11/13908296/hayao-miyazaki-artificial-intelligence-viral-video accessed 15 April 2025.
[5] Copyright Act 1957, s 2(d).
[6] Copyright, Designs and Patents Act 1988, s 9(3).
[7] US Copyright Office, ‘Zarya of the Dawn’ (Decision Letter, 21 February 2023), refusing registration of AI-generated images https://www.copyright.gov accessed 15 April 2025.
[8] European Patent Office, Decision J 0008/20 (Designation of inventor/DABUS) (21 December 2021) https://www.epo.org/en/boards-of-appeal/decisions/j200008eu1 accessed 15 April 2025.
[9] Constitution of India 1950, art 21.
[10] Digital Personal Data Protection Act 2023, No. 22 of 2023, s 4.
[11] Regulation (EU) 2016/679 (General Data Protection Regulation), art 22.
[12] NITI Aayog, ‘National Strategy for Artificial Intelligence #AIForAll’ (2018) https://niti.gov.in/sites/default/files/2021-10/NationalStrategy-for-AI-Discussion-Paper.pdf accessed 15 April 2025.
[13] Justice B.N. Srikrishna Committee, A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians (Ministry of Electronics and Information Technology, 2018) https://www.thehinducentre.com/resources/article24561713.ece accessed 15 April 2025.
[14] European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on Artificial Intelligence (AI Act)’ COM(2021) 206 final.
[15] OECD, ‘OECD Principles on Artificial Intelligence’ (2019) https://www.oecd.org/going-digital/ai/principles/ accessed 15 April 2025.
[16] UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (2021) https://unesdoc.unesco.org/ark:/48223/pf0000381137 accessed 15 April 2025.
[17] Rajesh T., ‘The Global Challenge of Regulating AI: A Multilateral Approach’ (2021) 52 International Journal of AI Policy 31, 33.
[18] Digital Personal Data Protection Act 2023 (India).
[19] OECD, OECD Principles on Artificial Intelligence (2019) https://www.oecd.org/going-digital/ai/principles/ accessed 15 April 2025; UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021) https://unesdoc.unesco.org/ark:/48223/pf0000381137 accessed 15 April 2025.