Published On: 26th May 2025
Authored By: Kabir Sindhi
GLS UNIVERSITY
Abstract
The rapid advancement of Artificial Intelligence (AI) and Machine Learning (ML) has left existing laws struggling to keep up. A central issue is determining who is responsible when AI systems cause harm, as in accidents involving self-driving cars. The EU's AI Act shows that regulation is beginning to adapt. Another major problem is that AI systems, which depend heavily on data, must operate within the privacy restrictions of the GDPR. Legal disputes have also arisen over ownership of, and rights in, works created through AI, because current copyright laws do not squarely address them. Legal measures are equally needed for ethical problems such as algorithmic bias and lack of transparency. The article further examines how AI regulation differs across jurisdictions, how AI is reshaping the job market, and the national cybersecurity risks it raises. It concludes by proposing dynamic, collaborative legal mechanisms to support the integration of AI while protecting innovation and human rights across regions and types of use.
1. Introduction
When an AI system makes a mistake, say a self-driving car causes an accident, it is unclear who should be held legally responsible: the developer, the company deploying the technology, or the machine itself.
The rapid development of Artificial Intelligence (AI) and Machine Learning (ML) technologies has brought about a new category of legal and rulemaking questions that existing frameworks are unable to handle. AI is becoming progressively embedded in vital sectors such as health, finance, law enforcement, and education, and the complexity, autonomy, and opacity of AI/ML systems in these settings require the development of new legal solutions.
This study examines the many-sided legal problems raised by AI and machine learning. It reviews current rules and regulations, along with concerns about liability, intellectual property, algorithmic bias, and data privacy, and explores how a balanced governance framework might be designed.
2. Evolving Regulatory Landscape
AI regulation is evolving rapidly across the world, but not uniformly. The European Union is leading the way with its extensive AI Act, described as the "first legal framework of its kind for AI." This groundbreaking law sorts AI systems into categories based on how much risk they pose and attaches different rules to each tier, producing a layered method of oversight and putting Europe at the forefront of worldwide AI regulation.[1]
AI is, however, difficult to regulate successfully because of the heterogeneous nature of machine learning algorithms. These algorithms vary enormously in design, function, and application, ranging from transportation and healthcare to social media and marketing. Moreover, identical algorithms may perform differently when applied to different sets of data, and both the algorithms and the data are evolving constantly. This makes clear that no single, one-size-fits-all regulation can govern all AI systems.
The complexity of AI governance means that regulatory authority over this subject will likely be distributed, with commentators noting that "multiple regulators need to explore machine learning applications within their domain of expertise." Even expert agencies, however, may struggle with the task; they need a flexible approach to AI regulation, supported by well-grounded data science resources and by dynamic regulatory forms rather than reliance on command-and-control oversight alone.[2]
3. The Delicate Balance of Data Privacy and AI Advancement
Artificial intelligence and machine learning are essentially data-driven technologies: they generally depend on processing huge amounts of data, including personal and sensitive information, to function properly and achieve their intended goals.[3] This dependence on extensive data puts data privacy and data protection at the top of the list of legal issues associated with AI and ML.
Throughout the world, a number of data protection laws have been enacted to control the collection, processing, and storage of personal data, each with its own features and requirements. Examples include the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States. India recently enacted the Digital Personal Data Protection Act (DPDPA) in 2023, a strong move towards a complete legal framework for the protection of personal data that also aims to promote technological innovation in the country.[4] Applying existing data privacy laws like the GDPR and the DPDPA to AI presents unique hurdles. When AI systems make decisions using personal information, ensuring individuals' rights, such as the right to understand how their data is used and to control it, becomes complex. Companies must be transparent about what data they collect and how they use it, especially when the data is used to train an AI system that could affect people's lives (for example, in healthcare or job recruitment).
These laws together give individuals a variety of rights over their personal data and impose responsibilities on companies for their data processing activities, particularly in connection with AI and ML. Many of these privacy rules, however, were drafted well before modern machine learning existed, and they may therefore be poorly suited to the specific data processing practices that AI involves.
A significant challenge is the “black box” problem: AI’s ability to learn and adapt independently often makes tracing the exact reasoning behind a specific decision difficult. This lack of clear explainability complicates efforts to determine responsibility when an AI system causes harm or makes an unfair decision.[5]
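The contrast between an opaque model and an explainable one can be made concrete in a few lines of code. The sketch below is an illustration only, not drawn from any system or case discussed in this article: it uses synthetic data and assumes the scikit-learn library. It trains a large ensemble whose individual predictions come with no human-readable rationale, then a shallow decision tree whose complete rule set can be printed, which is one route to the explainability that regulators increasingly demand.

# Illustrative sketch (hypothetical, synthetic data): why "black box" models
# resist the tracing of reasoning described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for personal data: each row a person, each column a feature
# (e.g., test score, years of experience) feeding an automated decision.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# A 200-tree ensemble: often accurate, but no single human-readable rule
# explains why a given applicant was accepted or rejected.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("ensemble decision:", black_box.predict(X[:1]))  # outcome, no rationale

# A shallow decision tree trades some accuracy for a rule set that a court
# or regulator can actually read end to end.
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(glass_box, feature_names=[f"feature_{i}" for i in range(4)]))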
4. Intellectual Property Rights in the Age of AI
AI and machine learning algorithms often produce valuable intellectual property. India's existing intellectual property rules must evolve to accommodate these innovations, distinguishing between human-generated and machine-generated inventions.
In 2022, GitHub Copilot[6] became the target of a class-action lawsuit alleging that its training on open-source code violated intellectual property laws. In June 2024, a California federal court dismissed almost all of the allegations, finding that the code Copilot produced was not close enough to the plaintiffs' code to sustain the copyright-related claims. The claims that survived, for breach of contract and violation of open-source licenses, could still become the centrepiece of how AI training data usage is treated in the future.
Current copyright laws struggle to address AI-generated works. Traditional frameworks assume human authorship, leaving ambiguity over whether rights belong to developers, users, or the AI systems themselves. For instance, tools like GitHub Copilot, which are trained on open-source code, have sparked lawsuits alleging violation of software licensing terms for failing to attribute original creators. In the same way, creators have filed lawsuits against image generators such as Stable Diffusion for using copyrighted graphics without consent.[7]
4.1 Emerging Solutions:
- The EU's Text and Data Mining (TDM) exception allows companies to use copyrighted material for AI training on a commercial basis unless the rights holders have expressly opted out.8
- India's IT Act (2000) and Contract Act (1872) have not kept pace with aspects of digital technology, particularly machine-generated IP; lawmakers therefore need to revisit the definition of "creators" for the Digital India era.[8]
5. Liability in AI Systems: Who is Responsible?
A. The world's first defamation lawsuit over ChatGPT content, in Australia, highlights the risks of AI-generated misinformation. Mayor Brian Hood of Hepburn Shire planned to sue OpenAI for defamation after ChatGPT falsely claimed he had served prison time for bribery when, in reality, he had reported the bribery rather than participated in it. This potential lawsuit could set a precedent for defamation law as applied to AI-generated content.[9]
B. In Md Zakir Hussain vs State of Manipur, the Manipur High Court considered the termination from service of a VDF member. Although the court cited ChatGPT prominently to provide context on the VDF's standing, its underlying rationale centered squarely on well-established legal concepts. The court stressed that Hussain's removal violated natural justice, as he was not afforded a reasonable opportunity to explain his stand on the allegations against him before the impugned order was passed.[10] This case is significant not just for its novel mention of AI in a judicial context, but more critically for reaffirming that technological aids cannot supersede fundamental procedural rights. The court's reliance on the bedrock principle of audi alteram partem (hear the other side) underscores that fairness remains paramount, irrespective of the tools used in legal proceedings.
C. In March 2018, an Uber autonomous vehicle caused the first death linked to self-driving technology when it struck a pedestrian in Arizona after misclassifying her. The incident starkly revealed how traditional tort-law frameworks for assigning blame struggle to address harm caused by autonomous systems: determining fault involved complex questions about the responsibilities of Uber, the software developers, the human safety driver, and even the pedestrian's own actions. It highlights the urgent need for updated legal mechanisms specifically designed to allocate responsibility when complex, autonomous AI systems cause damage or injury, and it stands as an example of how the tragic, human side of such projects can overshadow their technocratic and apolitical framing.[11]
6. Ethical Considerations and Their Legal Implications: Shaping Future Frameworks
Technological advancement poses challenges around AI bias, accountability, and transparency in the legal arena. Lawyers must ensure that their adoption of AI complies with the rules of professional conduct of their state. One such ethical duty is competence, as discussed in Reuters' "Ethical considerations in the use of AI."[12] While AI bias that replicates discrimination drives legal scrutiny, the EU AI Act imposes transparency requirements on AI, such as explainability.
The American Bar Association and other bodies are taking up this task, working to ensure that AI remains a partner to people rather than a replacement, as Clio notes in "AI and Law: What are the Ethical Considerations?"[13] Ethical considerations, including concerns about data security and accountability, are shaping new rules that keep AI subservient to society, as discussed in Above the Law's "The Ethical Implications of Artificial Intelligence."[14]
7. Global Governance and Cross-Border AI Regulation
Global AI regulation is diverse, and the EU AI Act has become the de facto standard for risk assessment and transparency regulation. The so-called "Brussels Effect" has led nations such as Brazil, South Korea, and Canada to adopt similar policies based on EU regulations, particularly in forbidding harmful technologies such as emotion recognition in workplaces.[15] However, significant differences exist:
- U.S. vs. EU: The EU, which treats risk reduction as a primary concern, adopts a pre-deployment approach to AI technologies, whereas the U.S. strategy rests on NIST's AI Risk Management Framework and local laws. This can make it difficult for multinational corporations to comply with the differing rules.
- Asia-Pacific: Japan and Singapore favour innovation-friendly regulation, whereas China imposes strict state control on AI models capable of shaping public opinion.[16]
The 2025 Paris AI Action Summit exposed differences of view on the future regulatory environment, with the United Nations backing international cooperation and the U.S. criticizing the measures as "overregulation." Meanwhile, momentum is building in the Latin America and Caribbean subregion: Brazil's AI Bill (PL 2338/2023) would require companies to evaluate the impact of their algorithms, while Peru leads the region with mandatory guidelines on AI development.[17]
8. AI’s Impact on Employment and Labor Laws
AI's impact on the workforce is substantial: the World Economic Forum projected 58 million more jobs by the end of 2025, even as production and administrative positions are displaced.[18]
8.1 Algorithmic Discrimination:
- US courts penalized Amazon in 2024 for a hiring AI that eliminated resumes mentioning a "women's organization".
- California's CPRA now gives employees the right to audit AI-driven recruitment tools for bias, while New York mandates transparency in automated promotion processes.[19]
8.2 Workforce Transition:
Meta's 2025 layoffs (5% of staff) coincided with aggressive hiring for AI engineering roles, highlighting the skills gap.[20]
The European Union's AI Liability Directive (2025) requires firms displacing workers to fund retraining programs in AI-augmented fields like cybersecurity and healthcare analytics.
9. AI and Cybersecurity: Legal Challenges and National Security
The EU AI Act mandates strict requirements for high-risk AI systems, including mandatory third-party audits for cybersecurity tools used in critical infrastructure.
9.1 Supply Chain Vulnerabilities:
According to Punter Southall Law, EU contract clauses now oblige suppliers to disclose where they obtain their training data and exactly where processing takes place, to minimize the risk of a data breach. Over 60% of 2024 AI breaches were caused by compromised third-party vendors.[21]
9.2 National Security:
A joint 2025 effort by Google and the U.S. Task Force Lima to harden AI models against adversarial attacks reflects the growing threat of AI-powered disinformation campaigns that use ML for speech and video generation.[22]
Under the UK Online Safety Act, an organization can be fined up to £18 million for failing to remove AI-generated deepfakes within 24 hours.[23]
9.3 Litigation Risks:
Under the EU's amended NIS2 Directive, CISOs can now be held personally liable for failing to fix AI vulnerabilities, with fines of up to €10 million.[24]
10. Conclusion
AI and ML technologies have generated legal disputes because their autonomy and rapid development outpace conventional legal constructs. Europe's AI Act is the first risk-based regulatory initiative, and its success will depend on how flexibly it can be applied in different contexts. Key challenges include determining who is legally responsible when AI systems cause harm, balancing technological innovation with data privacy rights under regulations like the GDPR, and adapting intellectual property laws to account for AI-generated creations. Bias and lack of transparency are major ethical issues that call for legal obligations. The divergence between the EU's strict approach and the US's more open one demonstrates the need for international cooperation. Tailored, sector-specific regulations, updated legal definitions, and strict liability for high-risk systems offer solutions to these challenges; enhanced transparency in vital applications, together with the education of policymakers, is fundamental. Collaborative, adaptive legal frameworks that involve legal professionals, technologists, and ethicists are the best guarantee that AI will be integrated in a way that protects rights across jurisdictions while producing maximum benefits.
References:
[1] European Commission, 'Regulatory framework for AI' https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai accessed 19 March 2025.
[2] SSRN, 'Regulating Machine Learning: The Challenge of Heterogeneity' https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4368604 accessed 19 March 2025.
[3] Nat Law Rev, 'Growth AI Law: Exploring legal challenges of artificial intelligence' https://natlawreview.com/article/growth-ai-law-exploring-legal-challenges-artificial-intelligence accessed 17 March 2025.
[4] WalkMe, 'AI legal issues' https://www.walkme.com/blog/ai-legal-issues/ accessed 20 March 2025.
[5] SRA International, 'Spotlight: Hot topics in research law – AI issues par' https://www.srainternational.org/blogs/srai-news/2024/05/08/spotlight-hot-topics-in-research-law-ai-issues-par accessed 20 March 2025.
[6] Saveri Law Firm, 'GitHub Copilot intellectual property litigation' https://www.saverilawfirm.com/our-cases/github-copilot-intellectual-property-litigation accessed 22 March 2025.
[7] MIT Sloan, 'Legal issues presented by generative AI' https://mitsloan.mit.edu/ideas-made-to-matter/legal-issues-presented-generative-ai accessed 17 March 2025.
8 Dentons, 'AI trends for 2025: IP protection and enforcement' https://www.dentons.com/en/insights/articles/2025/january/10/ai-trends-for-2025-ip-protection-and-enforcement accessed 19 March 2025.
[8] LinkedIn, 'Legal framework: artificial intelligence and machine learning' https://www.linkedin.com/pulse/legal-framework-artificial-intelligence-machine-learning-affan/ accessed 21 March 2025.
[9] Reuters, 'Australian mayor readies world's first defamation lawsuit over ChatGPT content' https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/ accessed 23 March 2025.
[10] LinkedIn, 'Md Zakir Hussain vs State of Manipur' https://www.linkedin.com/posts/lexintrigued_md-zakir-hussain-vs-state-of-manipur-activity-7199642404801454082-K7Vo/ accessed 23 March 2025.
[11] Wired, 'Uber's fatal self-driving car crash: saga over operator avoids prison' https://www.wired.com/story/ubers-fatal-self-driving-car-crash-saga-over-operator-avoids-prison/ accessed 23 March 2025.
[12] Reuters, 'Ethical considerations use of AI' https://www.reuters.com/legal/legalindustry/ethical-considerations-use-ai-2023-10-02/ accessed 23 March 2025.
[13] Clio, 'Ethics AI for lawyers' https://www.clio.com/resources/ai-for-lawyers/ethics-ai-law/ accessed 23 March 2025.
[14] Above the Law, 'The ethical implications of artificial intelligence' https://abovethelaw.com/law2020/the-ethical-implications-of-artificial-intelligence/ accessed 23 March 2025.
[15] GDPR Local, 'Top 5 AI governance trends for 2025: compliance, ethics and innovation after the Paris AI Action Summit' https://gdprlocal.com/top-5-ai-governance-trends-for-2025-compliance-ethics-and-innovation-after-the-paris-ai-action-summit/ accessed 23 March 2025.
[16] Dentons, 'AI trends for 2025: AI regulation, governance and ethics' https://www.dentons.com/en/insights/articles/2025/january/10/ai-trends-for-2025-ai-regulation-governance-and-ethics accessed 23 March 2025.
[17] Ibid.
[18] Innopharma Education, 'The impact of AI on job roles, workforce and employment: what you need to know' https://www.innopharmaeducation.com/blog/the-impact-of-ai-on-job-roles-workforce-and-employment-what-you-need-to-know accessed 18 March 2025.
[19] Labor Law PC, 'AI in hiring and workplaces: a labour law perspective for 2025' https://www.laborlawpc.com/blog/ai-in-hiring-and-workplaces-a-labor-law-perspective-for-2025/ accessed 21 March 2025.
[20] Forbes, '11 jobs AI could replace in 2025 and 15 jobs that are safe' https://www.forbes.com/sites/rachelwells/2025/03/10/11-jobs-ai-could-replace-in-2025-and-15-jobs-that-are-safe/ accessed 22 March 2025.
[21] Bank Info Security, 'AI risks: cybersecurity challenges for 2025' https://www.bankinfosecurity.com/ai-risks-cybersecurity-challenges-for-2025-a-27212 accessed 20 March 2025.
[22] Google, 'AI and the future of national security' https://blog.google/technology/safety-security/ai-and-the-future-of-national-security/ accessed 20 March 2025.
[23] Ibid.
[24] Ibid.