Published on 5th February 2025
Authored By: Aastha Sameer Chawan
Shriman Bhagojiseth Keer Law College
Introduction
Artificial intelligence (AI) and machine learning (ML) technologies have advanced at extraordinary speed, transforming human life across healthcare, finance, transportation, and entertainment. These technologies open vast vistas of opportunity while posing serious legal and ethical challenges. This article examines the major legal issues raised by AI and ML: regulatory gaps, liability, data protection, bias and discrimination, intellectual property, and ethical considerations.
From self-driving cars to virtual assistants, AI and ML are performing tasks once thought to be the exclusive domain of human intelligence. As these technologies mature, however, so does a litany of serious legal and ethical conundrums requiring immediate scrutiny. Questions of accountability, privacy, bias, and intellectual property rights compel us to reconsider whether long-established laws still apply.
The challenges of AI and ML straddle regulatory frameworks, liability, data protection, discrimination through biased algorithms, and intellectual property, including the copyrightability of algorithms and other works generated through machine learning. They extend to competition law, where the hoarding or excessive sharing of knowledge about these still-nascent technologies concentrates risk, and to cybersecurity, where attacks on AI-dependent civilian infrastructure, such as the systems controlling smart cities, could cost lives and damage economies in more ways than one might assume. Solving these challenges avoids harm and ensures that such technologies are developed as responsible tools for the common good.
Regulatory Gaps: Playing Catch-Up with Innovation
- Why Regulation Is Falling Behind
AI and ML are advancing so quickly that laws cannot keep pace. Most countries lack a well-thought-through framework to steer these technologies, leaving regulators to chase their collateral effects. Existing laws, such as the EU’s GDPR and sector-specific regulations in the US, touch parts of the AI ecosystem; however, they leave open the question of who should be held responsible when AI fails.
For example, the European Union is boldly stepping up with its proposed Artificial Intelligence Act, which classifies AI systems according to levels of risk and introduces tighter rules for uses such as biometric surveillance and self-driving vehicles.1 Such efforts are scarce, however, and outside Europe regulation remains piecemeal. This heterogeneity makes cross-border compliance challenging, a critical issue in today’s interconnected world.
- Defining AI for Legal Purposes
Another problem is definitional: what counts as “AI” legally speaking? Unlike more tangible products, AI systems are dynamic, constantly learning and adapting over time. In this respect, the most recent UK approach to regulation moves away from strict definitions and toward the impacts or outcomes of AI systems, making laws more flexible. Yet such flexibility may also blur the lines of enforcement.
- Regulatory Sandboxes: A Trial Run for Laws
To minimise the risks of AI while promoting innovation, a number of governments are experimenting with “regulatory sandboxes.” These frameworks allow companies to test AI systems under close supervision, identifying risks before full-scale deployment. The UK’s sandbox for financial technologies is one notable example. However, such tools remain experimental and lack global standardisation, making them a stepping stone rather than a permanent solution.
- Regulatory Gaps and Challenges
AI and ML develop faster than the laws meant to govern them, revealing the gaps that lawmaking bodies face.
AI systems make decisions on their own; the absence of human intervention in the decision-making process raises accountability and control as open questions.
- Lack of Holistic Regulation
Currently, no worldwide uniform framework regulates AI and ML. Existing laws, such as the GDPR in the EU, address particular issues but do not take a broader stance on AI. The European Commission has put forward an AI Act to fill this gap by evaluating AI systems at different risk levels.1 That exercise is still ongoing and reflects the lag in regulatory evolution.
- Balancing Innovation and Regulation
Striking a balance between encouraging innovation and imposing strict regulation is one of the greatest concerns. Too much regulation may throttle technological progress, while too little may lead to misuse and unintended consequences.
- Liability and Responsibility
One of the most contentious legal questions is who should be held liable for AI-driven decisions or actions. Conventional liability frameworks, such as negligence and strict liability, often do not fit AI systems well.
- Autonomous Decisions
AI systems can make decisions independently, which means that no particular entity directly bears liability. Accidents involving autonomous vehicles, for example, raise the question of who could be liable: the manufacturer, the developer of the software, or the user.2
- Product Liability
Under current product liability law, defects in AI may be attributed to the manufacturer. In practice, however, distinguishing a defect in the product from a defect in an algorithm’s training data is extremely difficult. To that end, the EU has recently reviewed its Product Liability Directive to address liability rules for AI.
- Data Protection and Privacy
AI and ML require tremendous amounts of data, which raises significant questions about data protection and privacy. The GDPR is an important framework but is difficult to apply to AI systems.
- Data Minimisation and Purpose Limitation
Many AI systems require massive amounts of data, which challenges the GDPR’s data minimisation and purpose limitation principles. A trade-off emerges between those principles and the requirements of data-driven innovation.
- Algorithmic Transparency
The GDPR provides rights in the realm of automated decision-making, including the right to an explanation. Achieving algorithmic transparency and interpretability, however, is technically challenging, particularly for complex ML models.
- Bias and Discrimination
Bias in AI develops from biased training data, which then leads to discriminatory results. Anti-discrimination laws therefore become relevant.
- Case Studies of Bias
Among the more dramatic examples of AI bias: a hiring tool developed by a major technology company was trained mostly on résumés from male applicants and was thus biased against female candidates.
- Legal Responses
Anti-discrimination laws such as the UK Equality Act 2010 provide a basis for challenging biased AI systems. However, the opaque and complex nature of these systems makes discrimination difficult to prove.
- Intellectual Property Challenges
AI systems generate original works, leading to issues of IP ownership and protection.
- Copyright for AI-Generated Works
Many jurisdictions require human authorship under copyright law, leaving AI-generated works unprotected. In the UK, the Copyright, Designs and Patents Act 1988 makes only limited provision for computer-generated works, and its application to modern AI is uncertain.
- Patentability
Another contentious issue is whether inventions generated by AI can be patented. In cases such as DABUS, patent offices have been reluctant to accept an AI as a named inventor.
- Ethical Considerations and Governance
Beyond regulatory frameworks, the ethical issues raised by AI and ML, as discussed above, demand dedicated governance initiatives.
Who Is Liable When AI Goes Wrong? Liability and Accountability
- The Complexity of Accountability
AI systems make decisions autonomously, raising the question: who is liable in case of error? Consider self-driving cars. When an AI-equipped car crashes, is it the fault of the manufacturer, the developer of the software, or the car owner? The 2018 case of an Uber self-driving car that killed a pedestrian brought these questions out of the realm of conjecture and exposed the insufficiency of existing legislation.
- Strict Liability vs. Negligence
Scholars are debating how best to apply traditional liability principles to AI. Strict liability, which attaches liability to the manufacturer regardless of fault, would spur the development of safer systems, while negligence requires demonstration of some particular failure, such as a failure to exercise reasonable care in developing the AI. The European Union is reviewing its Product Liability Directive to account for these new questions.
- Employers and Vicarious Liability
Employers who deploy AI systems in hiring decisions or in managing employees may also be liable for the actions of their AI. If an algorithmic recruitment system discriminates unfairly against a candidate, for example, the company using it could find itself subject to the UK’s Equality Act 2010. These issues demand greater transparency in AI-based decision-making processes.
The Data Ownership Versus Privacy Tug-of-War
- Data Dependence of AI
AI systems learn to operate effectively by working through large amounts of data; by its very nature, AI requires vast and diverse datasets. This dependency runs afoul of privacy laws, especially in the EU, where the GDPR strictly limits processing, including an outright prohibition on processing sensitive biometric data. Carve-outs such as scientific research, public interest, and legitimate interests sit uneasily alongside these limits, creating tension between data-driven innovation and the protection of privacy.
Article 22 of the GDPR gives data subjects the right not to be subject to decisions taken solely by automated means. If a bank declines to give you a loan on the basis of an artificial intelligence assessment, for example, that decision-making process is subject to explanation. Producing clear and simple explanations of complex machine-learning algorithms, however useful, remains an important technical challenge.
- Cross-Border Data Transfers
Because data crosses borders, questions arise over which jurisdiction governs and which laws must be complied with. Such frictions risk disrupting the digital economy, and mechanisms such as Standard Contractual Clauses, while a response, pose problems in and of themselves.
Intellectual Property: Who Owns AI Creations?
- Copyright and AI-Generated Works
AI is producing music, artworks, and even literature at incredible speed. Such productions raise prickly questions over who owns the copyright. In most places, copyright requires that authorship be human, putting the creations of AI in a state of limbo.
The UK’s Copyright, Designs and Patents Act 1988 cedes a little ground on the issue of computer-generated works, but its provisions feel out of date in the face of modern AI.
- Patents and Inventorship
The same is true of patent law. The widely discussed DABUS case shows how courts in the US and EU refused to grant patents for AI-generated inventions on the grounds that only a human can be an inventor. Such judicial decisions reflect how much broader a rewiring IP laws need.
- Ethical AI Principles
Multiple entities have formulated ethical AI principles, with more forthcoming. These centre on FAT: fairness, accountability, and transparency. Most of these principles are non-binding, suggesting a need for binding regulation.
- Governance Models
New governance models, such as AI ethics boards and regulatory sandboxes, have been envisaged to address these ethical issues while promoting innovation.
- International Cooperation
The global nature of AI requires international cooperation for setting consistent legal frameworks and addressing cross-border challenges.
- Cross-Border Data Flows
International transfers of personal data are essential for AI systems, which rely on mechanisms such as Standard Contractual Clauses (SCCs). Given the diversity of privacy laws, legal clarity is lacking.10
- Global Standards
The OECD and the UN are working towards common AI standards globally, fuelling responsible innovation.11
Conclusion
The rapid development of AI and ML creates formidable legal challenges, ranging from gaps in regulatory oversight and liability to data protection, bias, and IP. Laws need modification to accommodate these matters and to adapt to the peculiar nature of these technologies. International cooperation and new models of governance, complemented by durable legal frameworks, should feature significantly in unlocking AI and ML while mitigating their risks.
By proactively meeting these challenges in light of legal obligations, societies can ensure that AI and ML technologies develop responsibly, benefiting humankind while safeguarding key rights.
Reference(s):
- European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) COM (2021) 206 final, https://ec.europa.eu/info/business-economy-euro/banking-and-finance/financial-services-consumer-protection/financial-services-implementation-measure/legislative-proposals_en accessed 19 November 2024.
- US Department of Commerce, National Artificial Intelligence Initiative Act of 2020 (2020), https://www.congress.gov/bill/116th-congress/house-bill/6216/text accessed 19 November 2024.
- Department for Digital, Culture, Media & Sport, Establishing a Pro-Innovation Approach to Regulating AI (2022), https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai accessed 19 November 2024.
- Financial Conduct Authority, Regulatory Sandbox (2021), https://www.fca.org.uk/firms/regulatory-sandbox accessed 19 November 2024.
- T. Smith, ‘Who Is to Blame for Autonomous Car Accidents?’ (2019) 45 Harvard J L Tech 123.
- European Commission, Liability for Artificial Intelligence and Other Emerging Digital Technologies (2022), https://ec.europa.eu/info/law/law-topic/data-protection_en accessed 19 November 2024.
- J. Robinson, ‘Vicarious Liability in the Age of AI’ (2021) 37 Modern L Rev 89.
- US Department of Justice, CLOUD Act (2018), https://www.justice.gov/criminal-ccips/clarifying-lawful-overseas-use-of-data-act accessed 19 November 2024.