Published on: 29th October 2025
Authored by: Nandini
ABSTRACT
With a rapidly growing population, AI has become an integral part of our lives. Its role in enabling significant advances in fields such as healthcare, finance and business has made it inseparable from daily life. While our day-to-day activities increasingly rely on artificial intelligence to simplify our lives, it is important to note that the extensive use of AI decision-making systems raises serious concerns about legal responsibility and accountability. Accountability in AI decision making is an important concept, as it deals with the social, moral, ethical and legal implications of artificial intelligence across its various applications. As modern AI decision-making systems become more autonomous and more deeply integrated into society, ensuring accountability is becoming a major challenge. This article examines the challenges of accountability in autonomous AI decision-making systems, focusing on the tension between technological development and accountability.
INTRODUCTION
In recent years there has been increasing demand for AI decision-making systems, whether in healthcare, business or transportation. AI has proved very useful: it can analyse complex data and catch errors that humans may miss, and it can process data and make decisions quickly, enabling real-time decision making in sectors such as logistics. This steady shift towards AI marks one of the biggest developments of the 21st century. AI can be found in the healthcare sector assisting in clinically diagnosing diseases and recommending treatments; in the finance sector calculating risk factors, assessing credit and detecting fraud; and in the legal field supporting facial recognition and criminal risk assessment, thereby helping the judiciary and legislature to ensure public safety and deliver efficient legal judgments. Thus AI is used in nearly every sector, and its advancement and versatility extend from healthcare to the development of automated cars. But with this advancement, the biggest question that arises in people's minds is that of legal responsibility and accountability. It is difficult to place legal responsibility on artificial intelligence: the rules developed to date are designed to protect individuals from wrongs committed by people and are not sufficient to deal with AI. A major question therefore arises as to who should be held responsible when an AI system fails or injures someone, and this legal void is difficult to address.
Decisions made by AI must adhere to legal rules and regulations. If AI is penetrating the crucial parts of society, it must abide by legal and ethical frameworks in order to maintain public safety. The use of AI without clear standards and measures of accountability creates risks of privacy invasion, biased decisions and other harms. To reduce such risks and promote the ethical use of AI, thorough legal standards and accountability structures must be created.[1]
- Definition of AI
Artificial intelligence (AI) is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.[2]
AI is driven by machine learning, a technique that involves creating models which use algorithms to make decisions and predictions based on the data provided to them. Machine learning enables computers to learn and draw inferences from data without being explicitly programmed to do so.
There are various types of machine learning techniques which include:
- Neural Networks (or artificial neural networks) – Neural networks are modelled after the human brain’s structure and function. A neural network consists of interconnected layers of nodes (analogous to neurons) that work together to process and analyze complex data.[3]
- Supervised learning – The simplest form of machine learning, supervised learning involves the use of labelled data sets to train algorithms to classify data or predict outcomes accurately. In supervised learning, humans pair each training example with an output label.[4]
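The labelled-training idea described above can be sketched in a few lines of code. The following is a minimal, purely illustrative example (a one-nearest-neighbour classifier; all data and labels are invented), not a description of any production AI system:

```python
# Supervised learning in miniature: each training input is paired with a
# human-provided label, and the model predicts by comparing new inputs
# against those labelled examples. Data here is invented for illustration.

def predict(training_data, x):
    """Return the label of the labelled training example closest to x."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Labelled training set: (input value, human-assigned label) pairs.
training = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

print(predict(training, 1.5))  # nearest labelled examples are "low"
print(predict(training, 8.5))  # nearest labelled examples are "high"
```

Real systems use far richer models and data, but the principle is the same: the quality of the decisions depends entirely on the labelled examples supplied by humans.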
- Importance of accountability in AI
With the ever-increasing demand for and development of AI, accountability in AI decision making plays a crucial role in ensuring its safe, transparent and ethical use. Accountability refers to the responsibility of the entities behind AI systems (owners, shareholders, developers or policymakers) to ensure that AI decision-making systems are developed transparently and with ethical standards in mind. AI's involvement in sectors that concern the lives of innocent people, such as healthcare, finance and law enforcement, demands such accountability measures.
It is important for AI decision-making systems to be transparent so that they can be trusted by their users and the general public. It is equally important to ensure that accountability mechanisms are not only efficient but also fair and non-discriminatory.
- Legal consequences of the AI decision making systems
The most important question that arises with the use of AI decision-making systems is who should be held liable when such a system commits a wrong or causes harm. This question creates a legal void, as it is unclear whether liability should fall on the developers, the organisations deploying the system or the AI itself. The absence of clear accountability hinders the adoption and smooth use of AI decision-making systems within society.
There is no regulatory framework in India solely dedicated to AI. However, to ensure that AI development takes place under ethical guidelines and addresses key legal concerns, existing legislation such as the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023, and the Information Technology Rules, 2021 plays a crucial role in overseeing AI activities.
1.3.1 Liability for harm caused by AI systems
Legal accountability, as discussed earlier, concerns who should be held liable for harm or damage caused by AI systems. This may involve various areas of law, such as negligence, the law of torts and product liability.
Under product liability laws, manufacturers and developers may be held accountable if an AI system is deemed defective and causes harm.[5]
In cases of negligence, AI developers may be found responsible if they fail to meet established standards of care in the design and deployment of their systems, leading to harmful outcomes.[6]
“In March 2018, an autonomous Uber vehicle struck and killed a pedestrian in Arizona. The case raised critical questions about the legal liability of self-driving cars. Ultimately, Uber’s self-driving car program faced scrutiny for not meeting safety standards, and the incident highlighted the need for clearer regulations regarding the deployment of autonomous vehicles.”[7]
- Ethical implications with the use of AI decision making systems
From an ethical point of view, it must be ensured that AI systems are fair and non-discriminatory and that they protect individual rights. These issues demand discussion because AI systems possess the capacity to affect an individual's life, especially when deployed in highly sensitive areas such as healthcare and the legal field. AI must therefore be used in a way that avoids discrimination, such as biased hiring or unequal access to services.
- Operational Accountability with the use of AI
Operational accountability focuses on the internal mechanisms within an organisation that develops and deploys AI systems. This includes ensuring that the AI systems are used in compliance with legal and ethical standards, and defining clear roles for the stakeholders involved in their development, who may include developers, organisers and even external auditors. Such clarity is essential for determining where liability lies when an AI system causes damage. A key feature of operational accountability is transparency, which is essential for gaining public trust, especially in highly sensitive areas such as healthcare and law enforcement that can greatly affect people's lives.
1.6 Challenges of Accountability in autonomous decision making systems
In modern times there is no doubt that AI is penetrating almost all sectors to improve decision making in areas such as healthcare, finance and law. But alongside its growing benefits come various challenges, including the complexity of AI models and bias and discrimination, which may create accountability gaps. Each of these challenges must be addressed properly to ensure that AI decision-making systems are fair, ethical, unbiased and non-discriminatory.
1.6.1 Black Box problems relating to AI
One of the major problems concerning AI operations is the black-box nature of many AI models, especially deep learning systems. The decision processes of such systems are difficult even for their creators to fully understand or explain.
1.6.2 Bias and Discrimination
Bias in AI is a pervasive and critical issue that complicates the goal of accountability. AI models are often trained on historical data, which may contain biases reflecting societal prejudices or unequal treatment of certain groups.[8]
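One practical accountability measure that follows from this is auditing a system's outcomes across groups. The sketch below is a minimal, hypothetical illustration (the decisions and group labels are invented) of comparing approval rates to flag possible disparate treatment:

```python
# A minimal audit sketch: compare an AI system's approval rates across
# demographic groups. All decisions and group labels are invented.

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rate(decisions, group):
    """Fraction of decisions for `group` that were approvals."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")
rate_b = approval_rate(decisions, "group_b")

# A large gap between group rates is one signal that the system may be
# treating groups differently and warrants human review.
print(round(rate_a - rate_b, 2))
```

A gap in rates is not by itself proof of unlawful discrimination, but routine checks of this kind give auditors and regulators a concrete starting point for holding a system's operators to account.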
1.6.3 Lack of Human Oversight in autonomous decision making systems
One of the distinctive features of AI decision-making systems is their capability to make autonomous decisions without human intervention, as in automated vehicles or AI-based disease diagnosis in the medical field. This feature enables quick and speedy performance, but it also raises accountability concerns when such decisions can affect an individual or society at large. It is for this reason that human oversight is necessary for ensuring accountability in AI decision making.
CONCLUSION
Rapid advancement in AI is a key aspect of the 21st century. AI decision-making systems can be seen in almost every field, from diagnosing disease in healthcare to automated vehicles, law enforcement and the rapid analysis and processing of complex data. Yet as AI evolves across these fields, it raises the question of accountability: who should be held liable when an AI decision-making system causes damage or harm to individuals? This question becomes especially pressing as AI moves into sensitive areas that can greatly affect people's lives. Accountability in AI faces various challenges, such as the complexity of AI models, bias and discrimination, and the black-box problem, which may create accountability gaps. These challenges must be addressed properly to ensure that AI decision-making systems are fair, ethical, unbiased and non-discriminatory.
[1] Reuben Binns, 'Fairness in Machine Learning: Lessons from Political Philosophy' (2018) 81 Proceedings of Machine Learning Research <https://proceedings.mlr.press/v81/binns18a/binns18a.pdf> accessed 09 June 2025
[2] Cole Stryker, ‘What is artificial intelligence (AI)?’ (IBM, 9 August 2024) <https://www.ibm.com/think/topics/artificial-intelligence> accessed 09 June 2025
[3] Ibid
[4] Ibid
[5] O.E. Olorunniwo et al, ‘Accountability in AI Decision-Making’ (2025) <https://www.researchgate.net/publication/390668560_Accountability_in_AI_Decision-Making> accessed 09 June 2025
[6] Ibid
[7] Ibid
[8] Ibid