HUMAN RIGHTS IN THE DIGITAL ERA: THE CENTRALITY OF DATA PROTECTION AND PRIVACY

Published On: December 19th 2025

Authored By: Neha Raulo
Madhusudan Law University

INTRODUCTION

The dawn of the digital era has precipitated a profound transformation in human interaction, commerce, and governance. This hyper-connected landscape, powered by the mass collection and processing of personal data, offers unprecedented opportunities for innovation and social progress. Yet it simultaneously poses one of the most significant challenges to fundamental human rights in the twenty-first century. The right to privacy, long recognised as a cornerstone of human dignity and autonomy, is being systematically eroded by state and corporate practices.

In the digital age, data protection is not merely a technical or regulatory issue but an essential component of the fundamental right to privacy. Privacy itself is a prerequisite for the enjoyment of other rights, including freedom of expression, association, and assembly. This article first explores the evolution of privacy as a human right within international law. It then analyses the dual threats posed by state surveillance and the commercial data economy. Subsequently, it examines the emergence of data protection regimes, particularly the European Union’s General Data Protection Regulation (GDPR), as a normative response. Finally, it addresses persistent and emerging challenges, concluding that a robust, rights-based approach to data protection is imperative for safeguarding human dignity in the digital future.

THE RIGHT TO PRIVACY: A FOUNDATIONAL HUMAN RIGHT

The right to privacy is not a novel concept born of the digital revolution. Its status as a fundamental human right is deeply entrenched in international law. The Universal Declaration of Human Rights (UDHR 1948) states in Article 12 that “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence.” This principle was codified in the International Covenant on Civil and Political Rights (ICCPR 1966), which obliges states to respect and ensure individuals are protected against “arbitrary or unlawful interference” with privacy, family, home, and correspondence and against “unlawful attacks” on honour and reputation.

The Human Rights Committee’s General Comment No. 16 (1988) clarified that privacy includes control over personal information and applies to “communications undertaken by modern technological means.” In other words, the collection of personal data itself constitutes an interference that must satisfy legality, necessity, and proportionality.

European jurisprudence has reinforced these propositions. In Peck v United Kingdom (2003), the European Court of Human Rights (ECtHR) held that the broadcast of CCTV footage identifying an individual violated ECHR Article 8. In Uzun v Germany (2010), the Court accepted the use of GPS surveillance in a serious crime investigation but stressed strict proportionality. Earlier, in Klass v Germany (1978), the ECtHR observed that “the mere existence of legislation which allows a system for the secret monitoring of communications entails a threat of surveillance for all those to whom the legislation may be applied.” Together, these cases trace how “correspondence” and “interference” expand with technology.

The United Nations has also adapted to the digital context. UN General Assembly Resolutions 68/167 (2013) and 69/166 (2014) recognised that rights enjoyed offline must also be protected online. Human Rights Council Resolution 28/16 (2015) reaffirmed privacy’s centrality and created the mandate of the Special Rapporteur on the Right to Privacy. Successive reports of the Office of the High Commissioner for Human Rights (OHCHR) have warned that bulk surveillance undermines autonomy and chills democratic participation. Taken together, these instruments and interpretations ground the argument that data protection is integral to the right to privacy in the digital age.

THE DIGITAL THREAT LANDSCAPE: STATE SURVEILLANCE AND CORPORATE DATA EXPLOITATION

The vulnerability of privacy today stems from two interconnected forces: the expansive surveillance powers of states and the pervasive data-extractive models of corporations.

  • The Panoptic State: Mass Surveillance and Chilling Effects

The 2013 disclosures by Edward Snowden exposed vast programmes such as PRISM (United States) and TEMPORA (United Kingdom). These involved bulk interception of internet traffic and metadata, often without individualised suspicion or robust judicial oversight. Such practices mark a shift from targeted surveillance—traditionally subject to warrants and safeguards—to indiscriminate monitoring of entire populations.

The ECtHR has consistently stressed that surveillance measures must be “in accordance with the law” and proportionate. In Klass v Germany (1978), the Court acknowledged the threat posed by the mere existence of secret monitoring powers. In S and Marper v United Kingdom (2008), it held that indefinite retention of DNA profiles of non-convicted persons breached ECHR Article 8. Most recently, the Grand Chamber in Big Brother Watch and Others v United Kingdom (2021) concluded that aspects of the UK’s bulk interception regime violated Articles 8 and 10, clarifying essential safeguards for selection, search terms, and journalistic material.

The Pegasus spyware revelations underscore the risks of modern surveillance. Investigations reported deployments in India, Mexico, and Hungary, among others, to monitor journalists, activists, and opposition figures. Beyond legality, such measures produce a chilling effect: when people believe communications may be captured, they self-censor, withdraw from dissenting associations, or avoid seeking sensitive information—behaviour inimical to democratic society.

  • Surveillance Capitalism: The Corporate Commodification of Personal Data

In parallel, the rise of “surveillance capitalism”—popularised by Shoshana Zuboff—poses systemic risks. Dominant platforms such as Google, Meta (Facebook), and TikTok rely on extensive harvesting of personal data to fuel behavioural advertising and content personalisation. The collection is often opaque; consent is buried in dense terms of service that function as contracts of adhesion. Aggregated datasets reveal deeply private information—political opinions, health status, or sexual orientation—through inference, even when users never disclosed such data.

The Cambridge Analytica scandal (2018) showed how Facebook data could be weaponised to manipulate electoral preferences, challenging the integrity of democratic processes. Meanwhile, Clearview AI’s scraping of billions of facial images to build a searchable facial-recognition database illustrates how corporate practices can erode anonymity in public life. Regulatory scrutiny of platforms like TikTok—including concerns about data access under foreign law—highlights the jurisdictional complexity of governing global actors.

THE NORMATIVE RESPONSE: DATA PROTECTION AS A HUMAN RIGHT

Privacy protects individuals from unwarranted interference; data protection provides the structure of control, accountability, and redress over personal information. The international community increasingly recognises that both are necessary to safeguard dignity in a datafied society.

  • European Union Leadership

The Charter of Fundamental Rights of the European Union (2000) enshrines the right to respect for private and family life (Article 7) and a distinct right to the protection of personal data (Article 8). Article 8 requires processing to be fair, for specified purposes, based on consent or another legitimate ground, and guarantees rights of access and rectification.

The General Data Protection Regulation (GDPR 2016/679) operationalises these principles. Core features include:

  • Lawfulness, fairness, transparency: processing must have a legal basis and be understandable to data subjects.
  • Purpose limitation and data minimisation: collect only what is necessary for explicit, legitimate purposes.
  • Accountability: controllers must demonstrate compliance (documentation, DPIAs, governance).
  • Individual rights: access and rectification, erasure (Article 17—“right to be forgotten”), portability (Article 20), and protection against solely automated decisions (Article 22)—a qualified right subject to limited exceptions (e.g., contract necessity, law, explicit consent) and safeguards such as human review and contestation.
  • Privacy by design and by default (Article 25).
  • Cross-border transfers (Articles 44–50), including adequacy decisions, Standard Contractual Clauses (SCCs), and supplementary measures.
  • Enforcement (Article 83): significant administrative fines (up to €20 million or 4% of global annual turnover).

The GDPR’s extraterritorial reach has made it a global benchmark, shaping reforms beyond Europe. It influenced the California Consumer Privacy Act (CCPA 2018), the interpretation of Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), Brazil’s LGPD (2018), and India’s Digital Personal Data Protection Act (DPDP 2023).

  • International and Regional Instruments

The Council of Europe’s Convention 108 (1981) was the first binding international treaty focused on personal data. Its modernised instrument, Convention 108+ (2018), aligns with GDPR-style principles and has traction beyond Europe, reinforcing its role as a global reference point.

In Latin America, Brazil’s LGPD Article 1 explicitly protects “the fundamental rights of freedom, privacy, and the free development of the personality of the natural person.” In Africa, the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention, 2014)—in force since 2023—remains the only binding regional data-protection treaty outside Europe, granting data-subject rights to information, access, rectification, objection, and erasure and requiring independent authorities.

Across Asia-Pacific, approaches vary. In Justice K.S. Puttaswamy (Retd) v Union of India (2017), the Supreme Court recognised privacy as a fundamental right under Article 21 of the Constitution, laying constitutional foundations for the DPDP Act (2023). Japan, South Korea, and Singapore operate comprehensive regimes aligned with international standards, while the APEC Privacy Framework (2005) supports cross-border cooperation through non-binding guidelines.

PERSISTENT AND EMERGING CHALLENGES

  • The Inadequacy of Consent and Power Asymmetries

While the GDPR and many national laws elevate consent as a legal basis, real-world consent is often illusory. Users confront “take-it-or-leave-it” platforms and complex notices they cannot meaningfully negotiate. The asymmetry of bargaining power, information overload, and interface design patterns (so-called “dark patterns”) undermine voluntariness, specificity, and informed choice—especially for vulnerable groups such as children, older persons, and individuals dependent on essential digital services.

  • Emerging Technologies: AI, Biometrics, and the Internet of Things

Artificial Intelligence (AI) and machine learning enable opaque, consequential decisions about individuals—credit scoring, hiring, welfare eligibility, predictive policing. GDPR Article 22 provides an anchor, but difficult questions remain: what counts as “solely” automated, how to assess discrimination risk, and how to ensure algorithmic transparency without revealing trade secrets. The EU Artificial Intelligence Act (adopted 2024) targets “high-risk” systems with obligations for risk management, data governance, and human oversight; debates continue over restrictions on biometric identification and live facial recognition in public spaces.

Biometric technologies—facial templates, fingerprints, iris scans, DNA profiles—are uniquely sensitive because they are permanent identifiers. Absent strict limits, the spread of live facial recognition risks normalising identity checks at scale and chilling protest and association. Meanwhile, the Internet of Things (IoT) embeds sensors and connectivity into homes, workplaces, vehicles, and cities, creating ambient data trails that are difficult for individuals to monitor or control.

  • Cross-Border Data Flows and Jurisdictional Conflicts

The global internet complicates enforcement. In Schrems I (2015), the Court of Justice of the European Union (CJEU) invalidated the EU–US Safe Harbor for failing to guarantee essentially equivalent protection in the presence of US surveillance laws. In Schrems II (2020), the CJEU struck down Privacy Shield but confirmed the validity of Standard Contractual Clauses (SCCs), subject to case-by-case assessments and “supplementary measures” to ensure equivalent protection. These judgments expose structural tension between rights-based regimes (EU) and sectoral or intelligence-led approaches (US), leaving organisations to navigate complex transfer impact assessments and technical safeguards.

  • National Security and Authoritarian Uses

Many governments invoke national security to widen interception powers, mandate data retention, or require decryption assistance. These measures must still satisfy ICCPR Article 17 and ECHR Article 8 tests of legality, necessity, and proportionality, with effective oversight and remedies. Debates over India’s Aadhaar biometric identity infrastructure have raised concerns about function creep and exclusion; in Puttaswamy, the Supreme Court cemented privacy as a constitutional baseline. In Xinjiang, China, reports of mass biometric monitoring and profiling show how digital tools can facilitate discrimination and repression. These examples test the universality of privacy guarantees and the practical capacity of international law to restrain state power.

CONCLUSION

The digital era has not diminished the relevance of privacy; it has made it indispensable. Mass surveillance and corporate data exploitation threaten autonomy, equality, and democratic self-government. The normative response—from UDHR Article 12 and ICCPR Article 17 to the GDPR, LGPD, DPDP Act, Convention 108+, and the Malabo Convention—increasingly recognises that data protection is essential to the right to privacy.

Yet law remains a work in progress. Consent is frequently a legal fiction; emerging technologies outpace regulation; cross-border transfers strain the coherence of protections; and many regulators lack the resources to enforce effectively. Meeting these challenges requires: (i) stronger, coordinated enforcement with meaningful penalties; (ii) international harmonisation of minimum standards and interoperable transfer tools; (iii) embedding privacy by design and by default across systems; (iv) rigorous human-rights impact assessments for surveillance and AI; and (v) sustained judicial scrutiny to maintain legality, necessity, and proportionality.

Ultimately, protecting human rights in the digital age requires reaffirming a basic principle: individuals—not states or corporations—are the rightful sovereigns of their personal data. Privacy is not merely about personal dignity; it is a foundation of democracy, the rule of law, and public trust. The future of human freedom depends on ensuring that privacy and data protection remain central pillars of the human-rights framework.

REFERENCES 

  • Universal Declaration of Human Rights, GA Res 217 A (III), UN Doc A/810 (10 December 1948) art 12.
  • International Covenant on Civil and Political Rights, 999 UNTS 171 (16 December 1966) art 17.
  • Human Rights Committee, ‘General Comment No 16: Article 17 (Right to Privacy), The Right to Respect of Privacy, Family, Home and Correspondence, and Protection of Honour and Reputation’ UN Doc HRI/GEN/1/Rev.9 (Vol I) (1988).
  • UNGA Res 68/167, UN Doc A/RES/68/167 (18 December 2013); UNGA Res 69/166, UN Doc A/RES/69/166 (18 December 2014).
  • Human Rights Council Res 28/16, UN Doc A/HRC/RES/28/16 (1 April 2015).
  • European Convention on Human Rights (1950) art 8.
  • Charter of Fundamental Rights of the European Union [2000] OJ C364/1 arts 7–8.
  • Treaty on the Functioning of the European Union [2012] OJ C326/47 art 16.
  • Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation) [2016] OJ L119/1 (esp rec 1; arts 17, 20, 22, 25, 44–50, 83).
  • Council of Europe, Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (adopted 28 January 1981, entered into force 1 October 1985) ETS No 108; Protocol amending the Convention (Convention 108+, adopted 18 May 2018).
  • Klass and Others v Germany (1978) 2 EHRR 214.
  • Peck v United Kingdom App no 44647/98 (ECtHR, 28 January 2003).
  • S and Marper v United Kingdom (2008) 48 EHRR 50.
  • Uzun v Germany App no 35623/05 (ECtHR, 2 September 2010).
  • Big Brother Watch and Others v United Kingdom App nos 58170/13, 62322/14 and 24960/15 (ECtHR, Grand Chamber, 25 May 2021).
  • CJEU, Maximillian Schrems v Data Protection Commissioner (Schrems I) Case C-362/14, ECLI:EU:C:2015:650; CJEU, Data Protection Commissioner v Facebook Ireland Ltd and Maximillian Schrems (Schrems II) Case C-311/18, ECLI:EU:C:2020:559.
  • Carpenter v United States 585 US ___ (2018).
  • Lei Geral de Proteção de Dados (Brazil, Law No 13.709 of 2018) art 1.
  • African Union, Convention on Cyber Security and Personal Data Protection (Malabo Convention, adopted 27 June 2014, entered into force 8 June 2023).
  • Justice KS Puttaswamy (Retd) v Union of India (2017) 10 SCC 1.
  • Human Rights Watch, China’s Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App (2019).
  • Carole Cadwalladr and Emma Graham-Harrison, ‘Revealed: 50m Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach’ The Guardian (London, 18 March 2018).
  • Kashmir Hill, ‘The Secretive Company That Might End Privacy as We Know It’ New York Times (New York, 18 January 2020).
  • Shoshana Zuboff, The Age of Surveillance Capitalism (PublicAffairs 2019).
