Published on 19th June 2025
Authored By: Treasy Nilopher
Sastra Deemed University, Tanjore
INTRODUCTION
In the era of machine learning and algorithmic decision-making, the question of who controls personal data, and how that control is exercised, has never been more pressing. From digital lending apps to biometric identity networks, automated judgments now determine everything from creditworthiness to welfare entitlement [1]. As algorithms increasingly shape individual lives, the legal right to comprehend and challenge these black-box systems, often described as the “right to explanation,” has become a critical part of contemporary data protection regimes.
The European Union’s General Data Protection Regulation (GDPR) enshrines this right in Article 22, protecting individuals against decisions made solely by automated processing, especially where such decisions produce legal or similarly significant effects [2]. By contrast, India’s Digital Personal Data Protection Act, 2023 (DPDP Act), despite its forward-looking approach to consent and purpose limitation, is conspicuously silent on algorithmic accountability [3].
This silence is noteworthy in a country where data-driven welfare schemes such as Aadhaar already depend considerably on algorithmic processes, processes that have resulted in welfare exclusions, ration denials, and biometric mismatches with real human costs. Indeed, civil liberties organizations have already pointed out that the lack of algorithmic transparency in government schemes produces “technocratic exclusions” that are beyond legal challenge because no explainability framework exists.

This paper conducts a comparative examination of the GDPR and India’s DPDP Act, contrasting the former’s accountability mechanisms around algorithmic decision-making with the latter’s deliberate omission of them. It contends that the absence of a counterpart to GDPR’s Article 22 from the DPDP Act is no legislative error but a deliberate legal tactic, one that appears to favor the business autonomy of influential tech players over the informational entitlements of citizens. Particular attention will be paid to whether the DPDP Act’s imprecision serves as a regulatory soft landing for massive multinational corporations, even as Indian startups must meet more demanding standards, such as the GDPR, to remain competitive on the global stage. From this perspective, algorithmic explainability becomes not just a legal protection but a foundational space for digital justice, data sovereignty, and fair innovation.
GDPR AND THE RIGHT TO EXPLANATION
The General Data Protection Regulation (GDPR), adopted in 2016 and in force since May 2018, is the world’s most comprehensive data protection framework. Perhaps its most contentious aspect is the right to explanation, a protection rooted mainly in Article 22, which grants individuals the right not to be subject to decisions “based solely on automated processing” that produce legal or similarly significant effects. Although the phrase “right to explanation” does not appear in the text, its essence emerges from Recital 71, which explains that people should be able to “obtain human intervention, express their point of view, and contest the decision” when subjected to automated decision-making. The Regulation therefore places constraints on algorithmic obscurity, particularly in areas such as credit scoring, employment, insurance, and predictive policing, where data-driven decisions can significantly impact human lives.
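To make the mechanics of Article 22 concrete, the sketch below shows, in Python, how a data controller’s decision pipeline might gate significant automated decisions behind human review. This is a minimal illustration under stated assumptions: the data model, field names, and routing function are hypothetical, and nothing here is prescribed by the Regulation itself.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DecisionRoute(Enum):
    AUTOMATED = auto()     # decision may issue without human involvement
    HUMAN_REVIEW = auto()  # Article 22-style safeguard: escalate to a human

@dataclass
class Decision:
    subject_id: str
    outcome: str                    # e.g. "loan_denied"
    solely_automated: bool          # no meaningful human input in the outcome
    legal_or_similar_effect: bool   # e.g. credit, employment, insurance, welfare

def route_decision(d: Decision) -> DecisionRoute:
    """Illustrative gate: a decision that is both solely automated and
    legally significant is diverted to human review, where the data
    subject can obtain intervention, express a view, and contest the
    outcome (cf. GDPR Recital 71)."""
    if d.solely_automated and d.legal_or_similar_effect:
        return DecisionRoute.HUMAN_REVIEW
    return DecisionRoute.AUTOMATED

# Example: a fully automated credit denial must be escalated.
denial = Decision("subject-001", "loan_denied",
                  solely_automated=True, legal_or_similar_effect=True)
assert route_decision(denial) is DecisionRoute.HUMAN_REVIEW
```

Notably, on this sketch a firm that inserts nominal human sign-off need only mark the decision as not solely automated to escape the gate entirely, which is precisely the rubber-stamping loophole critics describe below.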
A paradigmatic example of this principle appears in Peter Nowak v Data Protection Commissioner, where the European Court of Justice held that individuals may inspect even an examiner’s comments as personal data, reinforcing the notion that decision-making processes must be transparent enough to audit [4]. This judicial attitude has elevated the GDPR above procedural compliance, making it a regime grounded in informational self-determination, a fundamental European value reasserted by the EU Charter of Fundamental Rights.
Despite the philosophical robustness of Article 22, it has not escaped criticism. Scholars such as Sandra Wachter and Brent Mittelstadt have pointed out the vagueness in defining what a “solely automated decision” actually is, and have questioned whether the rule requires a substantive explanation or only procedural notice. In practice, many corporations avoid full transparency by adding minimal human intervention, i.e., rubber-stamping, to stay outside Article 22’s ambit. This has provoked demands for more robust secondary legislation, such as the more explicit mandates of the EU Artificial Intelligence Act (AI Act), aimed at filling gaps in algorithmic risk categorization and audit requirements [5].

Nevertheless, even in its present state, the GDPR is an important step toward recognizing the asymmetries of power that accompany algorithmic rule. It legitimizes a citizen’s right to know and to challenge how automated systems impact their dignity, prospects, and rights. This stands in direct contrast to India’s data regime, which, as the next section explores, provides no such bulwark against algorithmic opacity.
SILENCE REGARDING ALGORITHMIC ACCOUNTABILITY
India’s Digital Personal Data Protection (DPDP) Act, 2023, was welcomed as a long-awaited step towards protecting digital rights. On closer reading, however, it betrays a strategic silence regarding algorithmic decision-making. Nowhere in its text does the Act use the terms “automated processing,” “algorithmic decision,” or “profiling.” The legislation rests on broad principles of consent, purpose limitation, and notice, but it does not acknowledge the central problem of automated systems making decisions with real-world consequences. This vacuum stands in sharp relief against India’s increasing dependence on AI-driven public infrastructure: the Aadhaar network, DigiLocker, law enforcement facial recognition, and predictive policing software have all proliferated with minimal to no legal restrictions on how these systems operate or on whether individuals can challenge the decisions they generate [6]. In this void, algorithmic invisibility is a structural feature of governance rather than a bug, an ecosystem in which AI systems function unencumbered and the people have no redress.
Detractors contend that this is not accidental but a carefully designed system serving state and corporate convenience. As legal academic Vrinda Bhandari puts it, the Act “privileges ease of compliance for data fiduciaries over substantive data rights for users,” specifically by omitting rights such as data portability, algorithmic explanation, and the right to object to automated profiling. The government retains sweeping discretion through executive rule-making powers under Sections 40 and 42, leaving important safeguards to be specified by future delegated legislation. This, too, undermines legal certainty, particularly in the field of AI regulation.
Without Article 22-type protections, India’s digital citizens are essentially denied any effective right to challenge or audit AI-driven decisions. This is particularly ominous given the increasing use of AI in credit scoring, insurance, school admissions, and public distribution networks, where even slight algorithmic biases can inflict disproportionate damage on already vulnerable groups. Although the government argues that flexibility is essential for innovation, this justification has opened the floodgates to regulatory arbitrage: global tech firms can exploit India’s weak protections, while Indian startups serving the EU must bear the enormous costs of GDPR compliance. Rather than promoting innovation, therefore, the DPDP Act is potentially establishing a two-tiered system, one in which multinational companies derive competitive benefits while Indian citizens remain digitally exposed.
DIGITAL COLONIALISM RELOADED: LOBBYING, LOOPHOLES, AND THE STRATEGIC SILENCE OF THE DPDP ACT
The DPDP Act’s glaring lacunae, particularly concerning algorithmic accountability, cannot be waved away as legislative oversight. Rather, they reflect a more troubling geopolitical and economic design: an intentional ambiguity meant to keep India a magnet for international tech behemoths, even if citizen rights are sacrificed at the altar of digital convenience. On this view, the DPDP Act is not merely a weak law; it is a tactical legal void intended to promote corporate complacency at the expense of constitutional security.
The Vague-by-Design Argument: Protecting Multinational Interests?
In contrast to the GDPR, whose provisions, such as Article 22, are carefully laid out to safeguard against automated decision-making, India’s DPDP Act contains no comparable rights, thereby relaxing the compliance burden for Big Tech corporations doing business in India. As policy analysts such as Malavika Raghavan point out, the Act follows a “consent-first, rights-last” approach, one that has historically favored data-hungry corporations averse to transparency and accountability [7].
This design looks more deliberate once the record of corporate lobbying is traced. In the lead-up to the final version, industry organizations such as NASSCOM and the US-India Business Council made numerous submissions calling for a “light-touch” regulatory regime that would not “hamper innovation.” Moreover, leaked drafts of earlier versions of the bill revealed redlined corporate changes advocating weaker data localization, ambiguous definitions, and sweeping exemptions for “public interest” processing, demands that ultimately found their way into the final bill.
Aadhaar’s Silent Algorithm: A Case Study in Legal Evasion
The Aadhaar project provides a chilling example of what happens when automated systems are allowed to function without legal oversight. Established as a biometric identity platform, Aadhaar’s authentication systems have produced systemic “algo-denials”: the rejection of food rations, pensions, and health benefits because of fingerprint mismatches, technical glitches, or server outages. Yet despite these well-documented harms, the DPDP Act requires neither disclosure nor redress, since it contains no provision insisting on explanation or human oversight in automated determinations.
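How such algo-denials arise can be illustrated with a short, purely hypothetical Python sketch of threshold-based biometric authentication. The threshold, scores, and messages below are illustrative assumptions and do not describe Aadhaar’s actual parameters.

```python
# Hypothetical sketch of threshold-based biometric authentication.
# The cut-off and scores are assumed values, not Aadhaar's real ones.

MATCH_THRESHOLD = 0.80  # assumed cut-off; real systems tune this value

def authenticate(match_score: float, server_reachable: bool = True) -> str:
    """Return an access decision for a benefit claim.

    A genuine beneficiary whose worn fingerprints score just below the
    cut-off, or who hits a server outage, is denied exactly like an
    impostor would be.
    """
    if not server_reachable:
        return "DENIED: authentication service unavailable"
    if match_score >= MATCH_THRESHOLD:
        return "GRANTED"
    return "DENIED: biometric mismatch"

# A manual labourer with abraded fingerprints scores 0.78: a false
# rejection that the system cannot distinguish from fraud.
print(authenticate(0.78))         # DENIED: biometric mismatch
print(authenticate(0.95, False))  # DENIED: authentication service unavailable
```

The point of the sketch is that a legitimate claimant falling marginally below the cut-off is, to the system, indistinguishable from an impostor; absent any duty of explanation or human oversight, that false rejection is also legally final.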
Worse still, the exemptions under Section 17(2) of the Act largely excuse the government from adhering to key principles such as purpose limitation and storage minimization where “necessary for national interest” [8]. This means Aadhaar-style systems affecting more than a billion individuals remain free from user challenge or algorithmic audit, a latitude that would not pass muster under GDPR scrutiny in Europe.
Startups Strangled, Giants Unfazed: The Two-Tiered Reality
Whereas MNCs such as Meta, Google, and Amazon operate with little friction under the DPDP regime, Indian AI startups serving export markets bear an uneven burden. Startups must comply with the GDPR to access EU clients, spending lakhs on legal audits, data mapping, and impact assessments. Foreign firms, meanwhile, continue to exploit the DPDP Act’s lenient framework, paying less in regulatory costs while profiting from India’s vast user base.
This uneven playing field perpetuates data colonialism, in which domestic innovation is taxed by foreign law while foreign tech giants escape accountability under Indian law. In so doing, the DPDP Act hypocritically exports sovereignty and imports surveillance capitalism.
CONCLUSION
The comparison of the GDPR and India’s DPDP Act is more than a technical legal exercise; it is a mirror to India’s data future. On one hand, the GDPR, for all its shortcomings, represents a rights-based digital regime: it reaffirms user control over data and demands transparency, explanation, and redress from algorithms. On the other, the DPDP Act of 2023 discloses a strategically designed silence, well-timed to ease regulation for government and Big Tech in tandem while sacrificing citizens’ digital autonomy.
This silence is not benign. It actively facilitates a new form of “algorithmic colonialism,” in which Western frameworks such as the GDPR impose costly compliance regimes on Indian startups while global firms take advantage of India’s softer regulatory hand. The outcome is a two-tiered system: one that penalizes innovation at home and rewards opacity from overseas.
The intentional vagueness of the DPDP Act, especially on automated decision-making, is not simply a legislative gap; it is a deliberate exercise in legal design. The Aadhaar case, with its algorithmic denials veiled in obscurity and lacking legal accountability, illustrates how catastrophic this silence can be in practice. And when laws permit such secrecy in areas such as welfare, finance, and health, the digital divide is no longer a matter of access; it becomes one of rights, survival, and dignity.
A decolonized data protection legislation in India should not copy the West but draw inspiration from its values while exercising local agency. That means inscribing clear-cut rights to algorithmic explanation, contestation, and human supervision, particularly where automation affects livelihoods. It also means closing the loopholes that let technology giants reap the rewards of legal uncertainty while Indian citizens and startups bear the cost. If India is to become a digital democracy, not merely a digital economy, it must enshrine safeguards that put individuals, not platforms, at the center of the data economy. In doing so, it can not only preserve constitutional dignity but also resist the gravitational pull of neo-colonial regulatory capture.
REFERENCES
[1] Mozilla Foundation, Insights, https://foundation.mozilla.org/en/insights/
[2] Regulation (EU) 2016/679 (General Data Protection Regulation), Article 22.
[3] The Digital Personal Data Protection Act, 2023, Ministry of Electronics and Information Technology, https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
[4] Peter Nowak v Data Protection Commissioner, Case C-434/16 (CJEU, 20 December 2017).
[5] The EU Artificial Intelligence Act, https://artificialintelligenceact.eu/the-act/
[6] Oxfam India, India Inequality Report 2022: Digital Divide, https://www.oxfamindia.org/knowledgehub/workingpaper/india-inequality-report-2022-digital-divide
[7] Malavika Raghavan, Carnegie Endowment for International Peace, https://carnegieendowment.org/people/malavika-raghavan?lang=en
[8] PRS Legislative Research, The Digital Personal Data Protection Bill, 2023, https://prsindia.org/billtrack/digital-personal-data-protection-bill-2023