Published On: April 24, 2026
Authored By: Abhishek Dubey
Lucknow Law College
Abstract
Artificial intelligence has moved from the realm of technological innovation to the centre of everyday decision-making. Algorithms now influence credit, hiring, policing, health care, content moderation, insurance underwriting, education, and judicial administration. Yet the law continues to rely upon assumptions that are increasingly strained by autonomous, adaptive, and opaque systems. When an AI system causes harm, traditional liability doctrines demand a human person, a visible breach, and a rational chain of causation. In many AI contexts, however, fault is dispersed across developers, deployers, data suppliers, platform operators, and end users. The result is an accountability deficit: harm is real, but responsibility is difficult to locate.
This article examines the Indian legal position and argues that the present framework remains fragmentary and insufficient. Tort law is too dependent on individual fault to deal with emergent machine behaviour; criminal law remains anchored in mens rea; and existing statutory regimes, including the Information Technology Act, 2000 and the Consumer Protection Act, 2019, are not designed for self-learning systems that may evolve after deployment. The Digital Personal Data Protection Act, 2023 improves data governance, but it does not constitute a comprehensive regime for AI accountability. Drawing on comparative developments, particularly risk-based regulation, the article proposes a hybrid model for India. Such a model would combine strict liability for high-risk AI, mandatory documentation and audits, explainability obligations, sector-specific regulation, and compulsory insurance for certain classes of deployment. The article contends that only a layered framework of this nature can reconcile innovation with justice, deterrence, and effective compensation.
Keywords: Artificial Intelligence; liability; tort law; criminal law; algorithmic accountability; Indian law; autonomous systems; digital regulation
I. Introduction
Artificial intelligence is no longer merely a software tool that executes a fixed command. In its most consequential forms, it is a system that learns from data, identifies patterns, adjusts its outputs, and may generate decisions that its human designers cannot fully predict. This shift has profound implications for legal theory. The classical architecture of liability in Indian law assumes a world in which actions are attributable to identifiable persons whose intent, negligence, or conduct can be assessed against a stable standard of care. AI unsettles that architecture. The more autonomous the system, the less convincing it becomes to say that a single person “did” the harmful act in any ordinary sense.
An additional reason for concern is the migration of AI into public decision-making. When algorithmic tools are used in welfare administration, policing, tax enforcement, or document verification, the harm is not merely economic but constitutional. An erroneous classification can affect livelihood, liberty, or access to benefits. In such settings, the state cannot hide behind vendor contracts or technical complexity. Public authorities remain responsible for ensuring that automated systems do not become instruments of arbitrariness. The requirements of fairness, reasoned decision-making, and non-discrimination therefore have to be translated into the design and procurement of AI systems themselves.
The practical consequences are already visible. AI systems are used to screen job applicants, filter loan applications, detect fraud, recommend sentences, flag online content, and classify individuals by risk profile. In many of these settings, an adverse decision may appear neutral because it is generated by a machine. Yet the apparent neutrality of code can conceal structural bias, training data defects, design errors, or commercial incentives that make harmful outcomes predictable in aggregate even if not traceable in every individual case. The law cannot remain content with the language of technological inevitability. It must ask whether current doctrines are capable of allocating responsibility fairly when harm is produced by systems that combine human design with machine autonomy.
The Indian legal system is particularly vulnerable to this challenge because it has not yet adopted a dedicated AI statute. Instead, AI-related disputes must be forced into existing categories such as negligence, defamation, consumer protection, data protection, cyber security, and intermediary liability. This piecemeal approach may work where the harm is conventional and the technology is a mere instrument. It becomes inadequate when the technology itself participates in the decision-making process. A facial recognition error, a discriminatory hiring filter, a harmful deepfake, or an autonomous vehicle accident does not fit neatly into any single doctrinal box. The law is left to improvise after the damage is done.
This article argues that improvisation is not enough. India requires a principled liability framework that recognises the distinctive features of AI: autonomy, opacity, adaptability, and scale. The core problem is not simply that AI can cause harm. Human technologies have always caused harm. The deeper difficulty is that AI can make the attribution of harm uncertain, thereby weakening deterrence and frustrating compensation. A legally adequate response must therefore combine ex post remedies with ex ante governance. The aim is not to halt innovation, but to ensure that innovation operates within a structure of responsibility.
II. Understanding AI and the Nature of Autonomous Decision-Making
The phrase “artificial intelligence” covers a broad family of systems. Some are rule-based and operate within tightly defined parameters. Others use machine learning techniques to infer patterns from large data sets. The latter category raises the most serious legal questions because the system’s output may not be a direct and transparent translation of human instructions. Instead, the machine constructs its own internal parameters through training. That process makes AI useful, but also difficult to regulate through traditional legal forms.
A central difficulty is the opacity often described as the “black box” problem. In many machine learning systems, the relationship between input data and output decision is technically complex and not easily interpretable. Even where engineers can identify broad design goals, they may not be able to explain why a specific recommendation or classification was generated in a particular instance. This opacity matters because legal responsibility depends upon explanation. Negligence requires proof of what a reasonable actor should have done. Product liability requires identification of a defect. Procedural fairness demands reasons. Where the system cannot meaningfully explain itself, law is forced to infer responsibility from surrounding circumstances.
Autonomy should also be understood in degrees rather than as an absolute. An AI system may be autonomous in the sense that it makes recommendations without continuous human intervention, while still remaining embedded within a human governance structure. That structure often includes developers who train the model, companies that deploy it, users who rely on it, and data suppliers whose information shapes the model’s behaviour. The fact that AI is not “free” in a metaphysical sense does not solve the legal problem. Responsibility still becomes fragmented, and fragmentation is precisely what makes accountability difficult to enforce.
The legal significance of AI is therefore not that machines have become persons. They have not. The significance lies in the fact that their operation disturbs the usual relationship between human intention and harmful consequence. In classical law, the actor and the harm are linked by a recognisable chain of choice and control. In AI systems, the chain may be indirect, distributed, or opaque. The law must decide whether to preserve old doctrines by stretching them beyond recognition or to reformulate liability in a manner that reflects technological reality.
III. The Existing Indian Legal Framework
Indian law contains several provisions that can be adapted to AI-related disputes, but none of them was designed with autonomous systems in mind. The resulting fit is partial and often awkward.
Tort law remains the most obvious starting point. A person injured by an AI-enabled product or service may attempt to plead negligence by showing that the developer or deployer owed a duty of care, breached that duty, and caused foreseeable harm. This approach is doctrinally familiar, yet it creates immediate difficulties. First, the chain of duty is unclear: is the duty owed by the programmer, the platform, the company that trained the model, the user who relied on the output, or all of them together? Second, the standard of breach is elusive because AI systems may fail in ways that no reasonable actor could fully predict. Third, causation becomes complicated when the harmful event arises from a combination of data bias, model drift, environmental inputs, and human reliance. Negligence is thus a poor fit for many AI harms because it presumes a human failure that can be individually identified and normatively condemned.
Strict liability offers a more promising analogy. The common law logic of Rylands v Fletcher[1] is that a person who keeps a dangerous thing likely to do mischief if it escapes must bear responsibility for the resulting harm, even without proof of fault. This principle resonates with high-risk AI deployments. If an entity chooses to use an autonomous system in a context where failure may cause serious injury, it is not unreasonable to require that entity to bear the risk of that choice. Yet strict liability doctrine in India has historically been developed in relation to industrial and environmental hazards, not adaptive digital systems. The analogy is useful, but it is not enough on its own. AI systems differ from static hazardous objects because they change through use. The risk is not merely one of escape or malfunction; it is one of emergent behaviour.
Criminal law presents an even sharper obstacle. Most offences under Indian penal law still turn on intention, knowledge, recklessness, or at least a culpable mental state. AI has no consciousness, moral understanding, or guilty mind. It cannot form mens rea. This means that direct criminal liability for the machine is conceptually impossible. One might look to corporate criminal liability or to the liability of human supervisors, but those routes only work where a human decision-maker can be shown to have authorised, ignored, or recklessly enabled the harmful conduct. When the problem is not deliberate misconduct but systemic unpredictability, criminal law becomes an ill-suited instrument. It is designed to punish blameworthy conduct, not to manage technological risk in the abstract.
Statutory law is also fragmented. The Information Technology Act, 2000[2] provides the basic legal architecture for electronic records, cyber offences, intermediary obligations, and digital authentication. It is indispensable for online governance, but it does not contain a theory of autonomous algorithmic harm. The Consumer Protection Act, 2019[3] introduces product liability and consumer remedies, yet its structure still reflects the logic of goods, services, and defects in a relatively conventional sense. An AI system may not be defective in the ordinary sense at the point of sale; it may become harmful through deployment, interaction, or retraining. The Consumer Protection Act can assist where AI is marketed as a product or service, but its remedies are not a complete answer.
The Digital Personal Data Protection Act, 2023[4] is an important addition to the regulatory landscape because it recognises the need to protect digital personal data while permitting lawful processing. It gives data principals rights and imposes obligations on data fiduciaries. That said, it is primarily a data governance statute. It addresses consent, notice, correction, erasure, and fiduciary duties, but it does not create a comprehensive framework for AI accountability. Data protection is only one dimension of AI harm. A model may be lawful in relation to data handling and still produce discriminatory, unsafe, or misleading outputs. The DPDP Act therefore complements, but does not replace, AI-specific regulation.
At the constitutional level, several rights are relevant. Article 14 is implicated when algorithmic systems encode bias or reinforce arbitrary classification. Article 21 is engaged where AI affects privacy, dignity, or decisional autonomy, as affirmed in Justice K.S. Puttaswamy v Union of India.[5] Article 19 concerns arise when AI shapes speech environments through moderation, ranking, recommendation, and automated content suppression. These rights-based concerns underscore that AI is not simply a commercial efficiency tool; it is a governance technology with constitutional consequences. Yet constitutional values alone do not solve the problem of private-law compensation. Rights require remedies, and remedies require liability rules.
IV. Why Conventional Liability Fails in the AI Context
The inadequacy of existing law lies not only in doctrinal mismatch but in the structure of AI itself. Three features are especially important: diffusion of responsibility, unpredictability, and scale.
1. Diffusion of Responsibility: In a traditional negligence case, there is usually a clear actor whose conduct can be measured against a standard. In AI, however, the final output may reflect choices made by multiple parties at different stages. The developer may have selected the model architecture, the deployer may have chosen the use case, the data provider may have supplied skewed training data, and the end user may have relied blindly on the output. This makes it tempting for each participant to deny primary responsibility by pointing to another. Such fragmentation creates a moral and legal gap in which everyone is partially involved but no one is fully liable.
2. Unpredictability: AI systems are often unpredictable even when designed responsibly. A reasonable developer may know that a model performs well on average but still not be able to anticipate its behaviour in edge cases. This is not merely a technical inconvenience. It goes to the heart of legal causation and foreseeability. If harm is not foreseeable in an ordinary legal sense, negligence becomes difficult to prove. But if liability is denied whenever a system behaves in an emergent way, victims are left without redress precisely when they need it most. The law must therefore rethink foreseeability for adaptive technologies. A risk may be foreseeable at the level of system design even if the precise incident is not.
3. Scale: AI systems can affect thousands or millions of people at once. A biased model used in recruitment, credit assessment, or public administration may create widespread structural harm while appearing to affect only individual users one by one. Conventional litigation is poorly suited to this phenomenon. Individual suits are expensive and slow; collective harm is diffuse; and many victims may never know that an algorithm was responsible. A liability model that depends entirely on individual complaint and proof will under-deter large-scale misuse.
The black box problem intensifies these concerns. Where reasons are unavailable, victims cannot easily show how the decision was made. This creates evidentiary asymmetry: the institution controlling the model knows far more than the affected person. In ordinary private law, such asymmetry can sometimes be managed through disclosure, adverse inference, or burden-shifting. In AI disputes, however, the information gap may be too large unless the law explicitly requires documentation, traceability, and auditability. Without those requirements, liability becomes theoretical rather than practical.
V. Comparative Approaches
Comparative law offers useful guidance, particularly from jurisdictions that have moved toward risk-based regulation. The European Union has developed the most influential model. Its approach is premised on the idea that not all AI systems pose the same level of danger and that legal obligations should correspond to the level of risk. High-risk systems are subject to tighter controls, including governance duties, human oversight, documentation, transparency, and post-market monitoring. The value of this model lies in its preventive orientation. It shifts the regulatory focus from compensating damage after the fact to reducing the probability of harm before deployment.
A risk-based model is attractive because it avoids overregulation of low-risk innovation while imposing stricter burdens on systems that can affect rights or safety. This is particularly important for sectors such as employment, health, transport, and public administration. India can learn from this calibrated approach. A blanket prohibition would be unrealistic; a laissez-faire approach would be irresponsible. The middle path is to distinguish between low-risk consumer applications and high-risk systems that require enhanced governance.
The United States has historically taken a more sectoral and fragmented approach, relying on a mixture of agency oversight, consumer protection, tort law, and voluntary standards. That model offers flexibility, but it can also produce regulatory gaps. One lesson from the American experience is that AI governance cannot be left solely to generic legal doctrines. Specific sectors demand specific rules. A credit-scoring model and an autonomous surgical tool should not be regulated in the same way.
India, meanwhile, has begun to emphasise AI as an innovation priority through policy initiatives and mission-oriented public investment. That is important because the state should not be perceived as hostile to technological development. But a development-first policy must be matched by accountability architecture. Innovation without liability creates externalities that are borne by the public rather than by the entity creating the risk. The comparative lesson is therefore not that India must import foreign law wholesale, but that it should combine developmental ambition with enforceable safeguards.
VI. Towards a Hybrid Liability Framework for India
A serious AI liability regime for India should be layered rather than monolithic. Different risks require different legal responses.
1. Strict Liability for High-Risk Deployments: Where AI is used in domains that directly affect life, bodily integrity, liberty, or access to essential services, the deployer should bear a heightened burden. This does not mean that every AI error results in liability. It means that the law should prioritise compensation and deterrence where the consequences of failure are grave. In such contexts, the entity choosing to deploy the system is best placed to internalise the risk through insurance, contractual allocation, and technical safeguards. Strict liability is justified not because the deployer is morally guilty in every case, but because the enterprise is the one that benefits from the deployment and is most capable of managing the risk.
2. Documentation and Audit Duty: AI systems used in high-stakes settings should be required to maintain records of training data sources, model updates, testing protocols, performance benchmarks, and human override procedures. Independent audits should assess bias, robustness, and explainability. The goal is not to burden every algorithm with bureaucratic ritual. The goal is to create traceability so that courts, regulators, and affected persons can determine what happened and why. Without documentation, legal accountability remains speculative.
3. Explainability Obligations: Full transparency may be technically impossible in some machine learning systems, but meaningful explanation is still possible in many contexts. At a minimum, affected persons should receive reasons that are intelligible enough to permit challenge. This is particularly important where AI is used in administrative or quasi-administrative decisions. A person denied employment, credit, insurance, or a welfare benefit should not be left with a cryptic machine output as the sole explanation for a consequential decision. Explainability is not only a technical ideal; it is a procedural justice requirement.
4. Compulsory Insurance: If AI deployment creates socially useful but non-trivial risks, compulsory or strongly incentivised insurance can ensure that victims are compensated promptly without requiring them to litigate the complexity of technical fault. Insurance also encourages risk pricing. When insurers demand better controls, the market itself begins to reward safer design. This is especially important in a developing economy where innovation is valuable but legal recourse may be slow.
5. Institutional Regulation: India may require a dedicated or specialised AI regulatory mechanism with sectoral coordination. Such a body could classify risk, issue codes of practice, supervise audits, and coordinate with data protection, consumer protection, telecom, financial, and competition authorities. A single, all-powerful regulator may not be necessary, but a fragmented landscape with no clear oversight would be worse. The law needs institutional capacity, not merely abstract standards.
6. Remedial Framework for Collective Harm: Where AI systems affect large groups in similar ways, class-style procedures, representative actions, or regulator-led enforcement may be more effective than individual litigation. This would reduce transaction costs and make rights practically enforceable. Without collective mechanisms, many victims will remain unremedied because individual claims are too small to justify litigation, even though the aggregate harm is substantial.
Finally, the law should preserve space for human judgment. The purpose of AI regulation is not to replace decision-makers with automated command. Human oversight must remain real and meaningful, especially in matters involving dignity, liberty, and equality. A system that merely rubber-stamps machine outputs cannot claim to be a human-in-the-loop regime. The law should insist that final responsibility in high-impact domains remains humanly accountable.
VII. Conclusion
Artificial intelligence challenges the deepest assumptions of liability theory. It fragments responsibility, complicates causation, and stretches the evidentiary and normative capacity of conventional doctrines. India’s current legal framework, though capable of addressing some incidental harms, is not yet adequate to the scale and novelty of autonomous decision-making. Tort law is too fault-centric, criminal law too intent-centred, and existing statutes too fragmented to serve as a coherent regime for AI accountability.
The appropriate response is neither panic nor passivity. It is legal design. India should build a layered framework that combines strict liability for high-risk uses, audit and documentation duties, explainability requirements, insurance-based compensation, and institutionally anchored oversight. Such a framework would not suppress innovation; rather, it would stabilise it by ensuring that the costs of risk are borne by those who create and profit from it, rather than by unsuspecting individuals harmed by opaque systems.
The law’s task in the age of AI is not to romanticise technology, nor to demonise it, but to civilise it. An autonomous system may be efficient, but efficiency without accountability is not progress. If AI is to become a legitimate instrument of governance and commerce in India, it must be governed by a legal order that understands both its promise and its peril.
References
[1] Rylands v Fletcher (1868) LR 3 HL 330.
[2] Information Technology Act, No. 21 of 2000, INDIA CODE (2000).
[3] Consumer Protection Act, No. 35 of 2019, INDIA CODE (2019).
[4] Digital Personal Data Protection Act, No. 22 of 2023, INDIA CODE (2023).
[5] Justice K.S. Puttaswamy v Union of India (2017) 10 SCC 1.
[6] Donoghue v Stevenson [1932] AC 562.