HOW THE BNS COMPELS A FUNDAMENTAL RE-EVALUATION OF CRIMINAL ACT, INTENT AND IMPUTATION OF LIABILITY TO CORPORATIONS WHEN THE PERPETRATOR IS AN ALGORITHM

Published On: December 3rd 2025

Authored By: Akshat Pugalia
National Law University, Odisha

INTRODUCTION

The rise of artificial intelligence (AI) has reshaped the operations of individuals, companies, and states, leaving almost no domain untouched. Over the last few years, AI systems have begun to make decisions and take actions that were previously reserved for humans. The opportunities created by this technological shift are tremendous, but it also raises difficult legal dilemmas, including questions in criminal law. One of the most pressing is how the law should respond when AI systems are used to inflict harm or violate the law[1]. In India, interest in this question has grown with the enactment of the Bharatiya Nyaya Sanhita, 2023, which replaces the Indian Penal Code, 1860.

Actus reus and mens rea are considered the two basic components of criminal liability. Actus reus means that a wrongful act has occurred, whereas mens rea refers to the guilty mind, the knowledge or intention behind the act. These notions work well where the accused is a human being.[2] It is far less simple, however, to identify a guilty mind or intent when an autonomous machine or algorithm arrives at a decision that causes harm.[3] Can an AI have a criminal mind in the way a person does? Or should liability fall on the programmer, the user, or the company that created the AI? Lawmakers and courts worldwide, including in India, are now being compelled to answer this question.

The difficulty is even greater where AI systems are applied in areas such as healthcare, transportation, and financial services, where their mistakes can cause real harm to people and society.[4] Until now, the law has typically attributed criminal conduct to the human being who designs, operates, or controls these systems. However, as AI becomes more advanced and self-sufficient, that practice is coming under strain. No single accepted solution yet exists, though several models have been floated as to whether, and when, criminal liability should rest on the people interacting with the AI, or even on the AI system itself.

The case of AI and algorithms gives India an opportunity to reconsider the application of its criminal law through the Bharatiya Nyaya Sanhita, 2023. The challenge is to establish rules that keep pace with technology while safeguarding people's rights and safety. This article explores how Indian law under the new code applies the core concepts of actus reus, mens rea, and corporate attribution in cases where algorithms are involved in a crime. It also examines the gaps that exist, potential solutions, and what the future may bring as India tries to strike a balance between technological progress and justice.

CASE LAW 

Standard Chartered Bank v. Directorate of Enforcement (2005)

Facts of the Case: The Directorate of Enforcement (ED), the government agency that monitors compliance with economic laws, accused Standard Chartered Bank of infractions of the Foreign Exchange Regulation Act, 1973 (FERA). The bank was alleged to have carried out unauthorized forex dealings in violation of India's foreign exchange laws. These dealings were allegedly executed via automated banking systems operated by the bank.

Facts about the Parties: Standard Chartered Bank is a transnational banking and financial services group that operates in India and in other parts of the world. The Directorate of Enforcement is the government agency responsible for investigating and prosecuting breaches of foreign exchange laws and related economic offences.

How the Issue Arose: The ED launched proceedings against Standard Chartered Bank on the basis of suspect foreign exchange transactions. According to the bank, the transactions were the result of internal control failures and errors in automated systems rather than intentional malpractice by any particular employee or officer. The central legal issue was whether the bank, as an organization, could be held criminally liable for the actions of its automated systems, and whether mens rea (criminal intent) could be imposed on the bank in the absence of a particular culpable mind.[5]

Key Judgments: The Supreme Court of India scrutinized the concept of corporate criminal liability, and specifically whether mens rea could be evidenced in corporate conduct. The court related the mens rea of a corporation to the intentions and knowledge of a "controlling mind", a senior officer acting as the directing will of the corporation. Even where the unauthorized act resulted from automated processes, a company could still be held liable if a senior officer had knowledge or control of the act. The court concluded that corporations could not escape liability simply because an action was automated; instead, they were required to show that they had exercised reasonable controls and due diligence.[6]

Dissenting Opinion: The case raised an arguable point as to whether the scope of mens rea should extend to corporations where human intent is indistinct or absent. The dissenters contended that imposing criminal liability on a company for the conduct of automated systems sits uneasily with the principle that a company cannot truly bear criminal liability where no guilty mind can be identified behind the act. The dissent recognized the need for caution in attributing corporate responsibility for actions undertaken by technology, and urged that, pending clearer legislation, corporations should not be subjected to excessive punishment.[7]

As the case shows, the straightforward application of core criminal law concepts such as mens rea and actus reus to new-age technologies such as automated systems and algorithms is challenging. It brings out the need to adapt the law to deal with criminal responsibility in this new climate of AI and autonomous decision-making systems.

CASE LAW ANALYSIS:

As seen in Standard Chartered Bank v. Directorate of Enforcement (2005), it is difficult to attribute criminal liability to a corporation where the harmful results are caused by automated systems. The Supreme Court applied the doctrine of the controlling mind, holding that mens rea could be attributed to a company through the senior officers who direct its will. The decision confirms that an organization's responsibility to prevent wrongdoing is not excused merely because the offence was committed through automation. Nevertheless, the case also shows how traditional concepts of mens rea falter when confronted with autonomous technologies. The dissenting opinion warned that, without a clear intent, corporate liability should not be drawn too widely, and that clearer legislative policy is required. The case illustrates the immediate need for legal regimes to adjust to the growing influence of AI and algorithms in decision-making, preserving accountability without punishing organizations for actions they cannot directly control as humans do. It provides a preliminary precedent for the debate over algorithmic criminal liability under the Bharatiya Nyaya Sanhita, 2023.[8]

ANALYSIS

The increasing use of artificial intelligence (AI) and algorithm-based systems in daily life has posed unprecedented challenges to traditional criminal law. With the introduction of the Bharatiya Nyaya Sanhita (BNS), 2023, it is high time India reconsidered the basic building blocks of its criminal law, actus reus (guilty act), mens rea (guilty mind) and corporate attribution, in the context of algorithmic criminal liability. The BNS, however, remains deeply rooted in age-old legal traditions and exhibits structural gaps that must be resolved to effectively supervise autonomous technologies and deliver justice and accountability.

Criminal liability rests on the coincidence of a wrongful act (actus reus) and a guilty mind (mens rea). The BNS still treats these doctrines as the basis of criminal liability. Yet AI and algorithms disturb this dualism: their activities are not the acts of human beings, nor are they guided by will in the traditional sense. An algorithm is code; however autonomously it may operate, it possesses no volitional or conscious intent sufficient to constitute mens rea. This presents problems in allocating liability for harms inflicted by AI, such as automated discrimination, financial manipulation, or mishaps involving autonomous vehicles.

Corporate criminal liability and attribution under the BNS share the same premise, namely the identification principle, under which liability is attributed to the corporation in accordance with the knowledge and intent of its "controlling mind", in most cases senior management. Although this doctrine works well for human-controlled businesses, it collapses when autonomous algorithms make business decisions with little or no human intervention. As AI systems become increasingly self-learning, capable of developing and adjusting beyond their original programming, the boundaries of control, predictability, and accountability become blurred. The existing framework is thus unable to address the complexities of algorithmic decision-making, leaving victims and regulators in a grey area.[9]

Potential Gaps in the Existing Framework of the BNS

  • Mens rea and Algorithmic Actions: The BNS does not account for the fact that algorithms cannot bear mens rea, and that criminal responsibility for AI cannot rest solely on conventional notions of intent. The absence of specific standards for the mental elements of automated acts is a significant gap.[10]
  • Actus reus and Causation: The current law assumes an act by a natural or legal person. This assumption is challenged by the autonomous operation of algorithms, which produces acts that are not directly controlled by humans. The framework offers no conceptualization of how such acts may be legally defined and attributed.
  • Corporate Attribution and Control: The "controlling mind" doctrine presupposes obvious human supervision and control. This assumption is put to the test by self-directed AI systems that require little human operator involvement, making it hard to implicate corporate entities.
  • Absence of AI-Specific Liability Norms: The framework contains no scheme of strict liability, vicarious liability, or risk-based responsibility, which could be better suited to the special character of harm caused by algorithms.
  • Lack of Transparency and Auditability Requirements: The BNS makes no provision for algorithmic transparency or audit trails, which are required to enforce accountability for AI-based harm.[11]

Suggestions to Close the Gaps

  • Redefine Mens Rea for Algorithms: Legislation ought to distinguish between traditional criminal intent and the causal relevance of algorithms. A modified mental element, such as the recklessness or negligence of the humans controlling the use of AI, should be emphasized, rather than requiring the AI itself to have intent.
  • Expand Corporate Liability Models: The BNS should adopt a hybrid attribution model that permits corporate liability via the controlling mind, but also through risk-based paradigms that hold organizations responsible where AI governance, risk management, and oversight have failed.
  • Codify Strict and Vicarious Liability for High-Risk AI Systems: Where AI systems carry a high likelihood of harming lives, property, or privacy, strict and vicarious liability should be codified without examining whether mens rea exists. Vicarious liability could also extend to intermediaries and operators, ensuring these parties are likewise held accountable.
  • Mandatory Algorithmic Transparency and Auditing: Legislation must require organizations using AI in sensitive fields to maintain accountable, explainable AI systems, subject to regular audits with appropriate oversight and documentation, in support of the investigation and adjudication of algorithmic harm.[12]
  • Integrate Ethical and Human Rights Safeguards: AI ethics should form part of the criminal justice framework, so that AI is not used to judge or discriminate against parties to a case, and so that constitutional rights (to equality, privacy and due process) are not contravened.[13]
  • Institute Special Regulatory Entities and Principles: Alongside legislative reform, special regulatory entities should be put in place to oversee AI applications through research, regulation, and supervision of principles of algorithmic liability and redress mechanisms. The incongruities between traditional criminal law principles and the role of new algorithms indicate that India's criminal law must change, not merely by amending existing codes and statutes on crimes, but through new legal infrastructure and creative governance.
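The audit-trail requirement in the suggestions above can be made concrete with a minimal sketch. The following example, entirely hypothetical and not drawn from any statute or real system (the names `AuditLog` and `record_decision` are illustrative), shows one way an organization deploying an automated decision system could log each decision with its inputs, model version and timestamp, hash-chained so that later tampering is detectable by investigators or auditors:

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch only: a tamper-evident audit log for automated
# decisions, of the kind a transparency mandate might require.
# All names here (AuditLog, record_decision) are hypothetical.

class AuditLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def record_decision(self, model_version, inputs, decision):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
            "prev_hash": self._prev_hash,
        }
        # Hash-chain each entry: every record commits to its predecessor,
        # so editing any past entry breaks all subsequent links.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self):
        # Recompute every hash; any edit to a past entry is detected.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True


log = AuditLog()
log.record_decision("credit-model-v2", {"income": 50000}, "loan_denied")
log.record_decision("credit-model-v2", {"income": 90000}, "loan_approved")
assert log.verify()
log.entries[0]["decision"] = "loan_approved"  # simulated tampering
assert not log.verify()
```

A log of this kind would not itself resolve questions of mens rea, but it would give courts and regulators the evidentiary trail that the adjudication of algorithmic harm presupposes.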

The BNS is modern in many ways, but remains anchored in human-based constructs of responsibility. As AI systems increasingly mediate or replace human decisions, the law should expand its conception of agency and responsibility. One way this could play out is a two-level legal status for AI-related action: first, developers and deployers of AI take responsibility for foreseeable defects of operation through graded forms of due diligence and accountability; or second, AI systems are recognized in law as artificial agents, through the definition of a category of autonomous systems carrying limited liability and obligations.[14] This preserves human liability for the actions taken while respecting the AI's autonomy, promoting safer innovation without losing the ethical principles that shape justice.

Ultimately, answering the responsibility question posed by AI requires the reformulation of Indian criminal law in the Bharatiya Nyaya Sanhita: rebalancing the doctrinal binary of actus reus and mens rea, introducing new corporate liability models, and adopting liability and transparency guidelines targeted at AI. Only through such holistic reform can India preserve legal certainty, promote technological evolution, and conserve social justice in the age of algorithms.

CONCLUSION

The law of criminal liability faces a radical test with the introduction of artificial intelligence and self-directed algorithms. India's new criminal code, the Bharatiya Nyaya Sanhita (BNS), 2023, is a significant step towards reforming the legal system. Yet it falls short in addressing the subtleties of algorithmic criminal liability, particularly the application of actus reus, mens rea and corporate attribution to cases involving autonomous technologies.

Criminal law has traditionally been founded on the belief that a criminal act (actus reus) together with a guilty mind must be established to hold the offender responsible. Algorithms, lacking consciousness and purposive intent, do not fit this premise. The BNS, more or less a copy of classical criminal law doctrine, is incapable of dealing with harm created by AI-based decisions that are not directly premised on the action or intention of an individual. Moreover, the BNS principle of corporate criminal liability still depends on attributing the intent of a human mind to the organization, a method that struggles where AI operates autonomously without continuous human supervision, making it harder to discover who bears responsibility.

This is precisely the loophole that proves the need to revamp the concept of criminal liability as it applies to technology. A clearer legal framework could be built by categorizing autonomous systems as agents or instruments of criminal conduct. In addition, the law needs risk-sensitive approaches to liability, such as strict liability for high-risk AI harm, and expanded corporate legal responsibility where AI governance and oversight have failed.

Transparency and auditability should likewise be made mandatory requirements, enabling accountability through explainable AI and continuous oversight. Ethical principles and the protection of human rights should be grounded in regulation and enforcement in order to reduce the risk of discriminatory or arbitrary algorithmic outcomes.

In my view, the future of criminal responsibility in India lies in a hybrid construct in which human participants continue to bear liability for foreseeable harms connected to AI, while autonomous systems are treated as distinct entities receiving separate legal consideration. This would be a moderate approach balancing innovation and justice, so that society forfeits neither legal certainty nor technological progress.

Finally, while the Bharatiya Nyaya Sanhita provides the scheme of modern criminal jurisprudence, it must still rise to the 21st-century challenges presented by AI. AI-specific standards of liability, refined principles of corporate attribution, and mandatory transparency are needed to safeguard justice and fairness in an increasingly computerized world.

References

[1] Cindy A. Frey-Brown, ‘Preserving the Rule of Law in the Era of Artificial Intelligence (AI)’ (2021) 30 Artificial Intelligence and Law 291, 306.

[2] 'Criminal Intent' (Legal Information Institute, Cornell Law School) https://www.law.cornell.edu/wex/criminal_intent accessed 22 September 2025.

[3] Ryan Abbott and Alexander Sarch, 'Punishing Artificial Intelligence: Legal Fiction or Science Fiction?' (2019) 53 UC Davis Law Review 323.

[4] D. Petković et al, It is not “accuracy vs. explainability” — we need both for trustworthy AI systems (2022) arXiv https://arxiv.org/abs/2212.11136 accessed 23 September 2025.

[5] Artificial intelligence: Challenges in criminal and civil liability (2024) (article).

[6] Standard Chartered Bank v Directorate of Enforcement (2005) 4 SCC 530, para 29.

[7] Standard Chartered Bank v Directorate of Enforcement (2005) 4 SCC 530, para 29.

[8] Bharatiya Nyaya Sanhita 2023

[9] Dennis Hennig, 'Why AI Is a Threat to the Rule of Law' (2022) 1 Digital Society art 10.

[10] Martina Merz et al, ‘Ethical and Legal Responsibility for Artificial Intelligence’ (2021) 1 Discover Artificial Intelligence 2.

[11] Ryan Abbott and Alexander Sarch, 'Punishing Artificial Intelligence: Legal Fiction or Science Fiction?' (2019) 53 UC Davis Law Review 323.

[12] Brent Mittelstadt et al, ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3 Big Data & Society 1.

[13] Justice KS Puttaswamy (Retd) v Union of India (2017) 10 SCC 1 (SC) (privacy as a fundamental right).

[14] Thomas Burri, 'Accountability in the Age of AI: Artificial Agents and the Attribution of Liability' (2021) 12 European Journal of Risk Regulation 10.
