Published On: April 14th 2026
Authored By: Ankita Milind Gaikwad
Vasantdada Patil Prathishthan Law College, Sion
Abstract
A rumour once moved slowly through a crowded market; today, a single forwarded message can travel across India in seconds. India’s digital transformation has expanded access to information and strengthened democratic participation, but it has also allowed misinformation to spread at unprecedented speed. The Reuters Institute Digital News Report 2025 notes that millions now rely on social media as a primary source of news.[1] In this environment, verified information competes with viral falsehoods. Fake medical cures, communal rumours, and electoral deepfakes have caused panic, social unrest, and, perhaps most dangerously, a gradual erosion of public trust in the institutions upon which democracy depends.
This paper explores how misinformation operates across political, economic, environmental, and health domains in India, evaluates the constitutional and regulatory framework addressing it, and argues that lasting solutions require both accountable governance and an informed citizenry — protecting democratic freedoms while strengthening society against deception.
Keywords: Digital Misinformation, Critical Thinking, Article 19, Information Integrity, Platform Accountability, Algorithmic Amplification, Digital Literacy, Bharatiya Nyaya Sanhita 2023
I. Introduction
The challenge of digital misinformation is not only technological; it is deeply constitutional. Article 19(1)(a) of the Constitution of India guarantees freedom of speech and expression, while Article 19(2) permits reasonable restrictions in the interests of public order, morality, and national security. The digital age has made this balance more delicate than ever. Strong government control risks censorship and a chilling effect on dissent, yet weak regulation allows misinformation to quietly distort democratic debate. The real dilemma lies in confronting falsehoods without weakening the freedom that sustains democracy itself.
Recent policy efforts reflect this tension. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introduced by the Ministry of Electronics and Information Technology,[2] seek to regulate harmful online content and establish fact-checking mechanisms. However, concerns about executive overreach and “State-determined truth” have led to constitutional challenges, including Kunal Kamra v. Union of India (2023), currently under consideration before the Supreme Court. At the same time, algorithm-driven platforms reward sensational content over careful reporting, turning private technology companies into powerful yet largely unaccountable gatekeepers of public discourse.
Importantly, law alone cannot solve the problem. Removing harmful posts may limit immediate damage, but it does not equip citizens to recognise manipulation. A democracy resilient to misinformation must invest in critical thinking and media literacy, enabling individuals to question sources, detect bias, and make informed judgments. Schools, universities, and public awareness initiatives therefore play a vital role alongside legal regulation.
II. Objectives
1. Structural accountability: To examine digital misinformation not merely as a failure of individual speech, but as a structural failure of digital architecture — including algorithmic amplification, engagement-driven monetisation, and platform design — thereby shifting accountability from users to systems and recognising the role of technological power in shaping public discourse.
2. Constitutional boundaries: To investigate whether the State’s anti-misinformation measures risk crossing the constitutional boundary between legitimate preventive regulation and disguised censorship, and where that line must be drawn to protect democratic discourse without weakening necessary safeguards.
3. Normative framework: To propose a normative framework that shifts the national conversation from speech restriction to information integrity governance — offering concrete recommendations for legislative reform, platform accountability, and citizen empowerment so that regulation strengthens democracy instead of restricting it.
III. Statement of the Problem
We live in a time when a 30-second video can sway an election, a single WhatsApp forward can wipe out a family’s savings, and a fake cure can cost a life. This is India in 2024–25. The World Economic Forum ranked India highest for misinformation risk in 2024.[3] With nearly 900 million internet users, outrage-driven algorithms, and citizens rarely trained to verify what they see, the crisis is no longer theoretical.
The problem is not missing laws. The Information Technology Act, the Bharatiya Nyaya Sanhita, the Representation of the People Act, and Securities and Exchange Board of India regulations all exist — but they arrive only after the damage is done. Deepfakes spread through the 2024 elections, finfluencers trapped investors before regulators acted, and a 1954 health statute required Supreme Court contempt proceedings in 2025 before it was enforced. Indian law is present, but consistently late.
The deeper wound is civic vulnerability. Many first-time voters faced misinformation without tools to test it; families trusted false cures because no school or system taught them to question digital claims. This paper studies that gap between law on paper and citizens left unprotected.
IV. Hypothesis
H₀: The existing legal framework, including the IT Act 2000 and BNS 2023, is sufficient to address digital misinformation independently. Critical thinking has no significant role.
H₁: The existing framework is insufficient alone. Critical thinking is essential to meaningfully reduce harm across political, economic, environmental, and health domains.
H₂: Digital misinformation affects all citizens equally regardless of background or literacy.
H₃: Misinformation disproportionately harms vulnerable groups — first-time voters, rural populations, women, gig workers, and financially inexperienced investors — who lack verification tools and institutional access.
Doctrinal Rationale: India has substantial statutory tools, yet misinformation escalates. The Patanjali Ayurved controversy revealed how a statute enacted in 1954 remained unenforced until the Supreme Court itself initiated contempt proceedings in 2025.[4] That gap between law on paper and law in practice is this paper's central concern. Law and literacy must function as partners, not alternatives.
V. Methodology
A paper arguing against misinformation must itself be trustworthy. That means being transparent about how it was researched, which sources it relied upon, and what it genuinely proves. This paper adopts three complementary layers — each chosen deliberately, each doing something the others cannot.
1. Doctrinal analysis: India has responded to misinformation through legislation touching elections, digital platforms, consumer protection, public health, and the environment. But this paper does not simply list these laws — it asks harder questions: Were they enforced? Did courts interpret them boldly? Did they actually prevent harm, or merely document it after the fact?
2. Empirical research: This layer is drawn not from original fieldwork but from institutions whose credibility is already established: the World Economic Forum,[3] UNESCO,[5] the Physicians Foundation,[6] SEBI,[7] and the Advertising Standards Council of India.[8] Every statistic has been cross-verified across independent sources because a paper arguing against misinformation cannot afford to spread any itself.
3. Comparative analysis: The EU Digital Services Act 2022[9] is examined not as a model to copy, but as a mirror reflecting what India’s framework currently lacks and what a genuinely preventive regulatory approach could look like.
Together, these three layers build one argument: India’s legal response to misinformation, though substantial, consistently arrives too late — and an informed, critically thinking citizenry is not a supplement to law but its most essential partner.
VI. Data Analysis
A. Political Misinformation
During the 2024 election season, many voters encountered shocking clips on WhatsApp purportedly showing leaders confessing to crimes or making hateful speeches. Several of these were later confirmed to be deepfakes. The Election Commission of India[10] had to intervene and warn political parties that synthetic media and fabricated opinion polls were misleading voters and disturbing public order. The harm was not abstract — it created fear, spread communal suspicion, reduced voter turnout in certain areas, and, most dangerously, made citizens doubt the fairness of elections and democratic institutions.
Indian law already has tools to address this. Under Section 123 of the Representation of the People Act, 1951, false statements about candidates can be treated as corrupt electoral practices. At the digital level, Section 69A of the Information Technology Act, 2000 allows the government to block content threatening public order. In X Corp. v. Union of India,[11] the Karnataka High Court accepted the State's power to block harmful misinformation but insisted on transparency and due process. In Ashwini Kumar Upadhyay v. Union of India,[12] the Supreme Court recognised fake news as a serious threat to democracy and called for stronger safeguards.
Yet law alone cannot fix political misinformation. Voters must pause before forwarding content, verify facts through platforms like Alt News, and critically question emotionally charged material that circulates during elections. Real solutions require three steps together: clear legal accountability, platform responsibility for labelling synthetic media, and public digital literacy programmes.
B. Economic Misinformation
In 2024, the Securities and Exchange Board of India[7] exposed hundreds of WhatsApp and Telegram groups in which unregistered “finfluencers” spread fake earnings reports and insider tips to inflate penny stock prices. Retail investors — many of them first-time traders from Tier-2 and Tier-3 cities — lost crores before authorities intervened.
The harm is immediate and structural. False stock tips drain savings, forged circulars weaken trust in institutions like the Reserve Bank of India, and rumours about GST changes or bank collapses can trigger panic withdrawals and market instability. Gig workers, small business owners, and financially inexperienced users suffer the most, turning misinformation into a question of economic justice.
Indian law responds through layered regulation. Section 12A of the SEBI Act and the PFUTP Regulations criminalise rumour-based market manipulation; BNS 2023 Sections 318, 336, and 340 apply to cheating and forged financial documents; the RBI Digital Lending Guidelines[13] curb misleading loan-app claims; and the Consumer Protection Act penalises deceptive financial advertising. Courts have upheld SEBI’s authority where false information demonstrates fraudulent intent or measurable market impact, carefully balancing Article 19 speech protections with investor protection. Yet law works only with informed citizens — verifying tips through SEBI filings, cross-checking RBI notices, recognising “guaranteed return” scams, and reporting fraud through SCORES or cybercrime portals are acts of essential civic self-defence.
C. Environmental Misinformation
When Delhi spent ₹22.9 crore on cloud-seeding trials in late 2025,[14] headlines celebrated a miracle cure for smog. Weeks later, a quiet rebuttal in Nature pointed out that cleaner air had resulted from shifting winds, not human intervention. That is how environmental misinformation works in India — it rarely shouts; it smiles politely, backed by press conferences, advertisements, or viral “expert” clips.
India’s vulnerability is real. A survey by CarbonCopy[15] found that a majority of Indians believe at least one false climate claim. During the 2024 Wayanad landslides, fake WhatsApp alerts triggered panic evacuations that slowed rescue teams. Even in daily life, the Advertising Standards Council of India[8] found that most “green” advertisements exaggerated or fabricated sustainability claims — leaving families paying more for products that provided no environmental benefit.
Law is beginning to respond. The Central Consumer Protection Authority's Greenwashing Guidelines, issued in 2024,[16] now demand proof behind environmental promises, while Section 54 of the Disaster Management Act punishes false disaster warnings. Courts have also linked environmental truth with the right to life under Article 21, recognising that misinformation can delay climate action and endanger public health. But no statute can cleanse polluted information alone — citizens must learn to question miracle solutions, verify disaster alerts through the India Meteorological Department, and read beyond viral headlines.
D. Health Misinformation
It takes one viral WhatsApp forward to undo years of public health work. In 2020, when Patanjali Ayurved launched Coronil — marketed boldly as a hundred per cent cure for COVID-19 — millions trusted it. Not because they were foolish, but because the claim arrived backed by a nationally recognised public figure, an implicit endorsement from a Union Ministry, and the quiet desperation of a pandemic. The Indian Medical Association took the fight to the Supreme Court, and what followed became a landmark moment in India’s legal history: advertisements banned, contempt proceedings initiated personally against Baba Ramdev and Acharya Balkrishna, and — most significantly — a March 2025 Supreme Court directive[17] ordering every state government to establish citizen grievance mechanisms under the Drugs and Magic Remedies Act, 1954, so that ordinary people could directly report misleading health claims.
The numbers behind this are not abstract. A 2024 report by First Check and DataLEADS[18] identified health misinformation — around vaccines, cancer, and reproductive health — as the single largest category of fake content on Indian social media. A 2025 Physicians Foundation survey[6] found that 86% of doctors worldwide reported a steep rise in patients influenced by health misinformation, with India consistently among the worst-affected nations.
The legal architecture exists — the Drugs and Magic Remedies Act, 1954, the Consumer Protection Act, 2019, the NMC Act, and BNS Section 272. What Indian Medical Association v. Union of India (2022–2025) exposed, uncomfortably, is that laws seven decades old go unenforced without judicial compulsion. The real antidote is twofold: courts willing to act, and citizens trained to question. Verifying a practitioner’s credentials on the NMC registry, cross-checking remedies against ICMR guidelines, and reporting false health claims to the CCPA are not technical skills — they are acts of self-defence in the digital age.
VII. Suggestions
I. Enact a Standalone Information Integrity Act
India’s response to misinformation is scattered across the Information Technology Act, the Bharatiya Nyaya Sanhita, sector-specific regulations of the Securities and Exchange Board of India and the Central Consumer Protection Authority, and judicial directions issued case by case. This fragmentation leaves enforcement gaps that misinformation readily exploits. A dedicated Information Integrity Act — carefully calibrated to Article 19 — should consolidate platform obligations, define misinformation with legal precision, establish a fast-track adjudicatory tribunal with technical expertise, and create a unified grievance mechanism accessible to every citizen, regardless of geography or literacy.
II. Mandate Synthetic Media Labelling by Statute
The advisory issued by the Election Commission of India during the 2024 elections was important, but advisories lack binding legal force. Every AI-generated image, audio clip, or video distributed on Indian platforms should carry a visible, machine-readable disclosure label. This requirement must be incorporated into the IT framework through statutory amendment, with platform liability attached to non-compliance. The transparency framework of the EU Digital Services Act[9] demonstrates that platform accountability and free speech can coexist; India's statutory framework must now reflect the same balance.
III. Recognise Digital Literacy as a Constitutional Obligation
The Supreme Court’s interpretation of Article 21 has expanded to include the right to education and health. It is time to recognise digital literacy as equally necessary for meaningful free speech. The National Education Policy 2020 acknowledges digital literacy but stops short of making it universal and assessable. Critical thinking education — covering source verification, algorithm awareness, and deepfake detection — must become a compulsory part of school and legal education.
IV. Regulate Platforms as Information Fiduciaries
Indian law still treats digital platforms as intermediaries entitled to safe harbour under Section 79 of the IT Act. But platforms that algorithmically amplify content and monetise engagement are not neutral conduits — they shape public understanding at scale. India should move toward an information fiduciary model, requiring platforms to act in users’ informational interests, disclose amplification criteria to regulators, and bear proportionate responsibility for systematically amplifying verified misinformation.
V. Establish a National Misinformation Response Protocol
Events like the Wayanad landslides and Delhi’s cloud-seeding controversy demonstrated that India lacks a coordinated mechanism to respond to misinformation during crises. A National Misinformation Response Protocol — led jointly by the Ministry of Electronics and Information Technology, the Press Information Bureau, and sector regulators — should activate automatically during elections, disasters, and health emergencies. Real-time coordination between platforms, fact-checkers, and government communicators is essential, because in the digital age, speed determines whether truth survives.
VIII. Conclusion
The lie that once travelled slowly now travels instantly — and India’s legal system, built for a slower world, is still catching up. This paper has examined misinformation across four domains and arrived at one unavoidable conclusion: the harm is real, the laws exist, and yet the damage continues. That contradiction is not accidental — it is structural.
Every legal intervention examined — from SEBI’s action against finfluencers to the Supreme Court’s contempt proceedings against Patanjali Ayurved — arrived after irreversible harm had already occurred. Law that responds only after damage is not protection; it is documentation of failure.
The deeper crisis is constitutional. If Article 19(1)(a) guarantees free speech and Article 21 guarantees a dignified life, then a citizen manipulated by algorithmic misinformation into voting against their conscience, losing their savings to a fake tip, or consuming a dangerous cure has been robbed of both — not by the State alone, but with the State’s silence. And that silence is itself a constitutional failure.
Critical thinking is not the soft alternative to law — it is law's only reliable partner. Without an informed, questioning citizenry, even the strongest statute becomes powerless in practice. The warning this paper leaves is simple: a democracy that does not teach its citizens to think critically will eventually find that no law can save it from the consequences of a manipulated citizenry.
References
[1] Reuters Institute for the Study of Journalism. (2025). Digital News Report 2025. University of Oxford. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2025
[2] Ministry of Electronics and Information Technology. (2021). Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Government of India. https://meity.gov.in
[3] World Economic Forum. (2024). Global Risks Report 2024. WEF. https://weforum.org/reports/global-risks-report-2024
[4] Indian Medical Association v. Union of India, W.P.(C) No. 645/2022 (Supreme Court of India 2022–2025).
[5] UNESCO & Ipsos. (2024). Disinformation and Elections: Global Survey on News Consumption and Trust. UNESCO. https://unesco.org
[6] Physicians Foundation. (2025, August). Survey of Physicians on Health Misinformation. Physicians Foundation. https://physiciansfoundation.org
[7] Securities and Exchange Board of India. (2024). Enforcement Actions Against Unregistered Investment Advisers and Finfluencers. SEBI. https://sebi.gov.in
[8] Advertising Standards Council of India. (2024). Annual Complaints Report 2024. ASCI. https://ascionline.in
[9] European Parliament. (2022). Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act). Official Journal of the European Union. https://eur-lex.europa.eu
[10] Election Commission of India. (2024). Advisory on Use of Artificial Intelligence and Deepfakes During Elections. ECI. https://eci.gov.in
[11] X Corp. v. Union of India (Karnataka High Court 2023).
[12] Ashwini Kumar Upadhyay v. Union of India, W.P.(C) No. 468/2019 (Supreme Court of India).
[13] Reserve Bank of India. (2022). Digital Lending Guidelines. RBI. https://rbi.org.in
[14] Nature. (2025, October 31). Cloud seeding and Delhi air quality: Evaluating intervention claims. https://nature.com
[15] CarbonCopy. (2024). Climate Misinformation in India: Public Perception Survey. https://carboncopy.info
[16] Central Consumer Protection Authority. (2024, October). Guidelines for Prevention of Misleading Advertisements: Green Claims. Ministry of Consumer Affairs. https://consumeraffairs.nic.in
[17] Supreme Court of India. (2025, March 26). Directions under Drugs and Magic Remedies Act regarding state grievance mechanisms [Order in IMA v. Union of India].
[18] First Check & DataLEADS. (2024). Health Misinformation in India: Patterns and Platforms. DataLEADS. https://dataleads.org
Cases Cited
Ashwini Kumar Upadhyay v. Union of India, W.P.(C) No. 468/2019 (Supreme Court of India).
Indian Medical Association v. Union of India, W.P.(C) No. 645/2022 (Supreme Court of India 2022–2025).
Kunal Kamra v. Union of India, W.P.(C) No. 220/2023 (Supreme Court of India).
X Corp. v. Union of India (Karnataka High Court 2023).