Published On: December 5th 2025
Authored By: Qudsia Misbah
SVKM's NMIMS School of Law
Abstract
In the digital era, data protection and privacy are central to human rights discourse. Rapid technological advances—surveillance, big data analytics, ubiquitous connectivity, artificial intelligence, biometrics—have created new risks for individual autonomy, dignity, and freedom. This article explores the framework of international human rights law that protects privacy, examines landmark case law that has shaped data protection, and discusses tensions between state security, commercial interests, and individual rights. It argues that while legal regimes such as the European Union’s GDPR, decisions of the European Court of Human Rights, and constitutional rulings in democracies such as India and the United States offer robust protections, lacunae remain, especially concerning emerging technologies. The article concludes with recommendations for strengthening privacy rights in legislation, jurisprudence, and policy.
Introduction
The digital revolution has transformed how personal information is generated, collected, transmitted, stored, and analysed. Smartphones, social media platforms, cloud computing, Internet-of-Things devices, biometric identification, and AI systems all produce or rely upon vast amounts of data. Such developments enhance convenience, connectivity, and efficiency. At the same time, they bring significant risks: arbitrary surveillance, misuse of personal information, data breaches, loss of control over identity, and erosion of privacy and autonomy.
Human rights law has long recognized privacy and data protection as fundamental rights. Yet, legal and institutional frameworks often struggle to keep pace with technological change. Key questions arise: what is the scope of the right to privacy and data protection under international and domestic law? How have courts responded to challenges posed by modern digital technologies? What are the trade-offs between security, law enforcement, commercial uses of data, and human rights? And what reforms are needed to ensure that privacy is meaningfully protected in practice?
Legal and Normative Framework
International human rights instruments establish privacy as a core value. The Universal Declaration of Human Rights in Article 12 states that no one shall be subjected to arbitrary interference with his privacy, family, home, or correspondence.[1] Similarly, the International Covenant on Civil and Political Rights, in Article 17, safeguards individuals from unlawful interference with privacy.[2] Regional instruments such as the European Convention on Human Rights also enshrine privacy rights, particularly under Article 8.
Domestic constitutions and judicial rulings have reinforced this principle. In India, the Supreme Court in Justice K.S. Puttaswamy (Retd.) v. Union of India held in 2017 that the right to privacy is protected as a fundamental right under Article 21 of the Constitution.[3] In Europe, the Charter of Fundamental Rights of the EU recognizes both privacy and data protection as distinct but interconnected rights.
Alongside constitutional and judicial frameworks, statutory and regulatory systems provide the backbone of protections. The General Data Protection Regulation (GDPR) in the European Union remains the most comprehensive global regime, establishing strict rules regarding consent, data minimization, purpose limitation, and rights of individuals to access and erase their data. At the global level, UN reports and OHCHR studies emphasize the continuing evolution of privacy in the digital age.
Case Law: Key Decisions that Shape Privacy in the Digital Age
Judicial bodies across jurisdictions have shaped the contours of digital privacy. In Digital Rights Ireland Ltd v. Minister for Communications (2014),[4] the Court of Justice of the European Union invalidated the EU Data Retention Directive on the ground that it excessively interfered with privacy, lacking proportionality and sufficient safeguards. The judgment was groundbreaking in establishing that indiscriminate retention of metadata amounts to a serious rights violation.[5]
In Zakharov v. Russia (2015), the European Court of Human Rights held that Russia’s surveillance framework, which permitted broad interception of communications through the SORM system, violated Article 8 of the ECHR.[6] The Court stressed that even without proof of actual interception, the mere existence of an intrusive system created a “reasonable likelihood” of interference with privacy.[7]
Another significant case was S and Marper v. United Kingdom (2008), where the ECtHR ruled that indefinite retention of DNA and fingerprints of acquitted individuals breached Article 8.[8] This decision emphasized that sensitive biometric data requires strict safeguards and cannot be stored indiscriminately.
In Bărbulescu v. Romania (2016), the ECtHR considered whether workplace monitoring of electronic communications infringed privacy. It held that even where an employer had informed an employee of monitoring, the employee retained a reasonable expectation of privacy, thereby extending Article 8 protection to workplace communications.[9]
The United States Supreme Court, in Carpenter v. United States (2018), confronted the issue of cell-site location information (CSLI). It held that government acquisition of 127 days’ worth of CSLI without a warrant violated the Fourth Amendment.[10] This decision marked a shift away from the traditional “third-party doctrine,” recognizing that the pervasive and sensitive nature of location data necessitates stronger constitutional protection.[11]
Emerging Issues and Tensions
The most pressing contemporary challenges arise from surveillance regimes, artificial intelligence, commercial exploitation of data, and cross-border data flows. Metadata, though seemingly innocuous, can reveal intimate patterns when aggregated. Cases such as Tele2 Sverige have reaffirmed that indiscriminate retention of metadata is incompatible with privacy protections.[12]
Artificial intelligence and big data pose further risks. Profiling, algorithmic discrimination, and lack of transparency in automated decision-making threaten privacy and equality. Scholars such as Giovanni De Gregorio have emphasized that algorithmic data processing can produce hidden harms, reinforcing structural discrimination.[13] To address this, researchers have proposed Human Rights Impact Assessments (HRIAs) for AI systems, ensuring that ethical considerations are built into design and deployment.[14]
Commercial exploitation of data adds another dimension. Companies collect vast amounts of personal data for targeted advertising, analytics, and behavioural tracking. Often, consent is not meaningfully obtained but extracted through opaque terms of service. This undermines autonomy and leaves individuals with little real choice.
National security and law enforcement interests are frequently invoked to justify intrusive surveillance. Courts have accepted such aims as legitimate, but consistently stress that interference must be lawful, necessary, and proportionate, accompanied by adequate safeguards and judicial oversight. The tension between public security and individual rights remains unresolved in many jurisdictions.
Cross-border data flows raise questions of regulatory harmonization. The Schrems cases in the EU demonstrated that transatlantic frameworks like Safe Harbor and Privacy Shield did not provide adequate protection for EU citizens’ data when transferred to the United States. This highlights the urgent need for global consensus on data transfer standards.
Gaps, Challenges and Recommendations
Legal systems face several gaps in addressing digital privacy. One is the sheer pace of technological innovation compared with legislative and judicial response. Courts often struggle with the technical complexity of data systems, limiting their ability to fully address risks.
Another challenge lies in expanding the definition of privacy. Traditional approaches focused on physical searches and communication content, but modern risks include metadata, biometric identifiers, behavioural data, and neural data. These forms of information require explicit recognition within the scope of privacy.
Oversight and remedies remain inconsistent. Many surveillance systems operate in secrecy, with little transparency or accountability. Effective independent oversight, judicial review, and remedies for individuals are necessary to ensure compliance.
Stronger data protection laws are required globally. GDPR offers a model, but many jurisdictions lack comparable frameworks. Harmonization across borders is essential given the global nature of data flows.
Finally, embedding ethical standards and human rights assessments into technological design is critical. Human Rights Impact Assessments, particularly in AI systems, can help mitigate risks before deployment, ensuring privacy is respected by design rather than as an afterthought.
Conclusion
In the digital era, protection of privacy and data is indispensable to safeguarding human dignity, freedom, and autonomy. Legal norms and case law over the past decade have progressively affirmed that individuals do not surrender their rights simply by going online or using digital devices. Landmark decisions such as Digital Rights Ireland, Zakharov, Carpenter, and Puttaswamy have drawn important boundaries on state power, data retention, and the reach of the third-party doctrine. Yet, emerging technologies—artificial intelligence, biometrics, neural data, and surveillance infrastructures—present new challenges not fully addressed by existing law.
Safeguarding privacy in the digital age requires a multi-pronged approach: robust legislation, judicial vigilance, transparency, remedies, and ethical design. Only through vigilant legal, regulatory, and societal measures can data protection and privacy cease to be abstract ideals and instead become operative realities in our digitally interconnected world.
References
[1] Universal Declaration of Human Rights, G.A. Res. 217A (III), U.N. Doc. A/810 at 71 (Dec. 10, 1948), art. 12.
[2] International Covenant on Civil and Political Rights, opened for signature Dec. 16, 1966, 999 U.N.T.S. 171, art. 17.
[3] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 S.C.C. 1 (India).
[4] Digital Rights Ireland Ltd v. Minister for Commc’ns, Marine & Nat. Res., Joined Cases C-293/12 & C-594/12, ECLI:EU:C:2014:238 (Grand Chamber).
[5] Id. ¶¶ 48-57.
[6] Zakharov v. Russia, App. No. 47143/06, Eur. Ct. H.R. (Grand Chamber, 2015).
[7] Id. ¶¶ 233-35.
[8] S & Marper v. United Kingdom, App. Nos. 30562/04 & 30566/04, Eur. Ct. H.R. (Grand Chamber, 2008).
[9] Bărbulescu v. Romania, App. No. 61496/08, Eur. Ct. H.R. (2016), ¶ 80.
[10] Carpenter v. United States, 585 U.S. ___, 138 S. Ct. 2206 (2018).
[11] Id. at 2217-18.
[12] Tele2 Sverige & Watson, Joined Cases C-203/15 & C-698/15, ECLI:EU:C:2016:970 (Grand Chamber).
[13] Giovanni De Gregorio, Algorithms, Bias, and Discrimination: The Hidden Harms of AI-Driven Data Processing, J. Tech. L. (forthcoming).
[14] Alessandro Mantelero & Maria Samantha Esposito, An Evidence-Based Methodology for Human Rights Impact Assessment (HRIA) in the Development of AI Data-Intensive Systems, arXiv (July 30, 2024).