Regulating Social Media: The Balance Between Freedom and Responsibility

Published on 5 June 2025

Authored By: Aman Laxminarayan Goyal
Madhusudan Law University

Introduction

When Facebook whistleblower Frances Haugen spoke before Congress in 2021, one of her more shocking claims was that “The company’s leadership knows how to make Facebook and Instagram safer but won’t make the necessary changes because they have put their astronomical profits before people.” Her testimony crystallized an increasingly clear truth — that the digital public squares where billions increasingly gather are ruled not by democratic norms, but by profit-maximizing algorithms designed in private.

The challenge of overseeing these massive virtual domains can be reframed through a classical philosophical conundrum, the Ship of Theseus. In this thought experiment, if every wooden plank of a ship is progressively replaced one at a time, at what point — if at all — does it become another ship entirely? Likewise, as regulations are gradually imposed on social media platforms, a tipping point must be established: When does well-meaning governance alter the free and open internet so profoundly that its founding vision is no longer recognizable? When do protections cross the line into restrictions? When does moderation cross the line into censorship?

That balancing act was vividly demonstrated in 2023 when Twitter (now X) was temporarily blocked in Brazil after it failed to comply with judicial orders to take down content. A Brazilian Supreme Court justice, Alexandre de Moraes, defended the action as essential to fighting disinformation, while the platform’s owner, Elon Musk, condemned it as censorship. Modern regulatory frameworks must be constructed in the complex terrain between these polarized positions.

From Digital Utopianism to Regulatory Urgency

“Governments of the Industrial World… You have no sovereignty where we gather,” the late John Perry Barlow proclaimed in his 1996 “Declaration of the Independence of Cyberspace.” This libertarian hope inspired the internet’s early builders but feels naive now that digital platforms are repeatedly weaponized against the democratic institutions they were meant to nurture.

The implications are no longer abstract. In Myanmar, when Facebook was used to incite violence against Rohingya Muslims, United Nations investigators concluded that the platform had played a “determining role” in the crisis. Much of the misinformation about COVID-19 vaccines originated on social media and then spread across platforms, fueling vaccine hesitancy and, ultimately, excess mortality. In the United States, the 2021 Capitol riot was planned at least in part via social media, and election systems around the globe endure coordinated disinformation campaigns.

“What we are experiencing now is a crisis of the digital commons,” says Professor Shoshana Zuboff of Harvard Business School, whose research on surveillance capitalism tracks how user data are commodified to predict and shape behavior. “The social costs get externalized and the profits get privatized—a classic market failure that calls for regulatory remedy.”

These concerns cut across conventional partisan divides. Conservative voices raise concerns about viewpoint discrimination and censorship by platform gatekeepers. Progressive critics concentrate on monopolistic power and the algorithmic amplification of extremism. Parents of all political stripes worry about mental health effects on youth and violations of privacy. Small businesses complain about arbitrary policy enforcement that puts their livelihoods in jeopardy.

The Regulatory Chessboard: A Strategic Framework

Social media regulation can be seen as a chess game played on a multidimensional board. In this analogy, the chess pieces correspond to regulatory mechanisms — each with unique capabilities and constraints:

The king stands for the fundamental rights — of free expression, of privacy — that need to be defended. The entire regulatory structure is jeopardized when these are under threat.

The queen — powerful and adaptable — represents comprehensive legislative frameworks, like Europe’s Digital Services Act. These have many potential lines of movement but require a substantial amount of political capital to deploy.

The bishops, moving diagonally across the board, are the sectoral rules — those applying to a specific issue, such as data protection law or competition law.

Knights move in an L-shape and signify novel forms of regulation that skirt traditional hurdles, such as mandatory interoperability or algorithm audits.

The rooks, advancing in straight lines, represent enforcement mechanisms and sanctions that give regulations their bite.

The pawns are relatively many, but each is very limited; they denote the smaller policy interventions and technical standards that, when linked together, define the field.

Unlike chess, the contest over social media platforms features many players making simultaneous moves, each working toward its own interests. Tech companies deploy legions of lobbyists and lawyers to shield their business models. Organized civil society advocates for users’ rights. Governments form coalitions to exercise sovereignty and balance competing interests. Users express preferences through the platforms they choose to engage with, but network effects leave them with limited power.

The complexity is increased even further by the fact that the board itself is always changing. By the time TikTok became a dominant platform in 2020, the regulatory frameworks aimed at Facebook and Twitter suddenly seemed out of date. In 2023, as generative AI was added to social platforms, new risks sprang up overnight. That technological dynamism calls for agile regulatory responses, not static frameworks.

Current Regulatory Landscape: A Fragmented Response

The world’s response to social media challenges reveals sharply contrasting approaches that reflect differing values and constitutional traditions.

With its trilogy of digital policy — the General Data Protection Regulation (GDPR), the Digital Services Act (DSA) and the Digital Markets Act (DMA) — the European Union has emerged as the regulatory pacesetter. These frameworks focus on user rights, systemic risk mitigation and graduated obligations based on platform size.

“The European approach treats digital spaces as extensions of our physical public sphere, with similar standards and expectations,” explains Margrethe Vestager, Executive Vice President of the European Commission. “The era of treating these platforms as neutral intermediaries is over.”

The United States, by contrast, has taken a less hands-on approach. Section 230 of the Communications Decency Act still protects platforms from liability for user-generated content — a law that the legal scholar Jeff Kosseff once called “the twenty-six words that created the internet.” But this immunity is becoming more contested on both sides of the aisle.

In authoritarian contexts, controlling social media has often served to bolster state power rather than protect citizens. China’s Great Firewall is the most sophisticated system of digital censorship in the world; Russia’s “sovereign internet” law gives the government the ability to disconnect the country from the global network entirely in “emergencies.”

These competing approaches result in a fractured regulatory system in which platforms face conflicting legal obligations. The EU demands that some content be taken down while the U.S. protects it as free speech, creating a thorny compliance challenge for platforms. When India requires traceability of encrypted messages even as privacy advocates warn of security risks, core values clash directly.

Evidence-Based Regulatory Principles

To work, social media regulation must be grounded in evidence, not moral panic or industry talking points. Insights from disciplines as diverse as computer science, psychology, political science and law offer important principles for the calibrated regulation needed here.

Proportionality and Risk Calibration

Not all platforms carry the same inherent risks. TikTok’s algorithmic recommendation model is built for entertainment, while LinkedIn’s serves professional networking. Messaging apps present different challenges than content-hosting platforms. Regulatory burdens should be proportionate to platform size, function, and documented risk.

This principle is already in play in the European Union’s Digital Services Act, which holds “very large online platforms,” defined as having over 45 million users, to elevated standards such as risk assessments, independent audits and researcher access to data. Early signs indicate that this tailored approach avoids imposing disproportionate burdens on smaller platforms while ensuring that dominant players are held to account for systemic risks.

This differentiated approach has translated into financial implications: very large platforms are expected to incur €15-20 million in annual compliance costs under the DSA, whereas small and medium platforms will incur far lower costs or be exempt from certain provisions.

Procedural Justice and Enforcement Transparency

Errors are inevitable in content moderation at scale. What is important is what is done about these mistakes. Users whose content is removed or whose accounts are suspended should receive clear notice, an explanation of the violation, and an easily accessible appeals process.

Concrete benchmarks for evaluating platform governance already exist in the Santa Clara Principles on Transparency and Accountability in Content Moderation, developed by a coalition of academics and civil society organizations. The principles stress notice, an opportunity to appeal, and transparency reporting as minimum standards for legitimate content moderation.

As Jennifer Broxmeyer, a former content policy director at Meta, puts it: “Scale makes error inevitable. The question is not whether mistakes will occur, but whether there are systems in place to detect and correct them promptly.”

Human Rights Framework

International human rights law offers nuanced mechanisms to balance competing rights and interests. Principle 11 of the UN Guiding Principles on Business and Human Rights establishes an explicit expectation that businesses respect human rights, irrespective of state obligations.

Limitations on expression should be necessary, proportionate, prescribed by law, and in pursuit of a legitimate aim. Platforms should assess human rights impacts before entering new markets or launching high-impact features.

When Twitter launched in Nigeria, a failure to attend adequately to local context meant that the platform was, at times, used to plan acts of violence against ethnic minorities. A rigorous human rights impact assessment might have identified these risks and allowed safeguards to be put in place before harm occurred.

Regulatory Agility and Evidence-Based Iteration

The rapid evolution of digital technologies heightens the risk of regulatory paralysis: rigid, inflexible rules soon become irrelevant or are undermined by technological workarounds.

Agile responses include regulatory sandboxes that allow new rules to be piloted in controlled environments, sunset provisions that require periodic review of regulations, and co-regulatory models that harness the expertise of multiple stakeholders. Feedback loops embedded in regulatory design allow frameworks to adjust to new challenges.

One existing model is Canada’s regulatory sandbox for fintech, which could be adapted to content moderation, allowing platforms to test novel harmful-content detection algorithms under regulatory oversight.

Practical Regulatory Mechanisms

Turning principles into practice requires specific mechanisms tailored to various aspects of how platforms operate.

Platform Accountability Via Fiduciary Duties

Legal scholars Jack Balkin and Jonathan Zittrain have proposed an information fiduciary model in which the platforms that process user data would owe duties of care, loyalty, and confidentiality. Just as doctors must put their patients’ welfare ahead of their financial interests, such platforms could be required to put the well-being of their users ahead of engagement metrics when the two conflict.

Such an approach has also gained bipartisan support in the United States because it builds on established legal traditions rather than creating entirely new regulatory categories. By concentrating on the relationship between platforms and users instead of individual content, it may find its way around First Amendment restrictions with less difficulty.

The information fiduciary model could have helped in instances like Instagram’s internal research showing the platform made body image issues worse for teenage girls — information the company allegedly hid while it continued optimizing for engagement.

Breaking Network Effects Through Interoperability

When Facebook acquired Instagram and WhatsApp, its dominant position was further entrenched. Users dissatisfied with platform policies face high switching costs due to network effects—the value of a social network depends primarily on who else uses it.

Under mandatory interoperability requirements, users would be able to communicate with those on competing services or transfer their data and social connections to new providers. Technical protocols for such interoperability already exist and have been used in telecommunications, email and banking.

The EU’s Digital Markets Act is a step in this direction, requiring “gatekeeper” platforms to allow interoperability with third-party offerings. Preliminary evidence suggests this has spurred innovation by lowering barriers to entry for startups that compete with incumbent platforms.

Algorithm Accountability and Design Standards

Algorithms exert enormous influence over information flows, yet they operate with little oversight. By focusing on requirements for audits, risk assessments, and safety standards, this type of regulation targets the algorithmic amplification of harmful speech while avoiding direct regulation of speech itself.

France’s law will require platforms to build recommendation systems that are safe, fair, explainable and subject to user control. Such regulation targets upstream causes instead of downstream symptoms because it focuses on system design instead of individual content decisions.

When YouTube changed its recommendation algorithm in 2019 to limit the promotion of borderline content, the amount of watch time for that content dropped by 70 percent — showing how much technical design choices affect information ecosystems.

Transparent Advertising Models

The surveillance advertising model underlies many of the harms caused by platforms, ranging from privacy violations to attention manipulation. Regulation in this area would involve comprehensive ad libraries, exhaustive targeting disclosures, and meaningful consent mechanisms.

Jurisdictions might choose more aggressive interventions such as a ban on microtargeting for sensitive categories (political, housing, employment) or a prohibition on behavioral advertising to children under 16. Such measures could be more effective than content-focused regulation alone by targeting business model incentives that lead to harmful design decisions.

The tangible impact of such measures is already showing: after Apple introduced App Tracking Transparency — a requirement that users give express consent before being tracked across apps — just 24% of users opted in, demonstrating that consumers are averse to surveillance-based advertising when presented with a real choice.

Implementation Pathways and Political Feasibility

Even the best-intentioned regulatory frameworks do not translate automatically into practice. Practical pathways must contend with political realities, technical constraints, and institutional limitations.

Building Technical Capacity in Regulatory Bodies

Most regulatory agencies lack the expertise to oversee complex digital platforms. The United Kingdom’s creation of a specialized Digital Markets Unit within its competition authority is one recipe for developing targeted regulatory capability. Such investments are needed across jurisdictions, underwritten by technical training programs and pay competitive enough to recruit talent from industry.

The price of this kind of capacity building is low relative to the economic and social damage that unregulated platform power can cause. Australia’s eSafety Commissioner operates on an annual budget of around AUD 30 million — a fraction of the billions in revenue generated by the platforms it oversees.

International Coordination Mechanisms

Because social media is inherently global, international coordination on common standards is needed. Complete harmonization is neither feasible nor desirable, but baseline principles and coordination mechanisms can ease compliance burdens and reduce regulatory arbitrage.

One model for cross-border regulatory cooperation is the network of Digital Services Coordinators set out by the EU through its Digital Services Act. Existing international organizations or new purpose-built forums could develop similar structures on a global scale.

Navigating Constitutional Constraints

Different constitutional traditions set different limits on permissible regulatory approaches. The First Amendment imposes particular limitations on content-based regulation in the United States that are absent in European or Asian settings.

Creative regulatory design can navigate these constraints. For instance, targeting platform design, economic incentives, or procedural requirements rather than particular content categories may serve protective goals without intruding on speech protections.

Conclusion

As we navigate the regulatory thicket of social media governance, the Ship of Theseus frames our fundamental challenge: how do we preserve the democratizing potential of these platforms while mitigating their demonstrable harms? The game of chess among stakeholders plays out on an ever-changing board, with each move reshaping our digital public square.

Neither uncritical reliance on industry self-regulation nor the heavy hand of government offers a path forward. Rather, nuanced regulatory frameworks rooted in democratic values, empirical evidence, and realistic implementation strategies can improve platform accountability without hindering innovation.

The decisions made today will shape digital communication — and thus democratic discourse — for generations. States are “laboratories of democracy,” as Justice Louis Brandeis famously said, places where policy experiments can be tried out. The global regulatory landscape now resembles such a laboratory, and the time is right to share lessons about what does (and does not) work in balancing digital freedom with digital responsibility.

The issue is not whether social media platforms should be regulated — they already are, albeit largely through opaque terms of service and proprietary algorithms rather than democratic processes. The question is whether regulation will be established through transparent, accountable public institutions or hidden behind corporate closed doors. The answer will shape not just the future of technology, but of democracy itself.

 

References

[1] Testimony of Frances Haugen, Senate Subcomm. on Consumer Protection, Product Safety, and Data Security (Oct. 5, 2021).

[2] Brazil Sup. Fed. Tribunal, ADPF 949/DF, Relator: Justice Alexandre de Moraes (Apr. 17, 2023).

[3] John Perry Barlow, A Declaration of the Independence of Cyberspace, Elec. Frontier Found. (Feb. 8, 1996), https://www.eff.org/cyberspace-independence.

[4] Hum. Rts. Council, Rep. of the Independent International Fact-Finding Mission on Myanmar, U.N. Doc. A/HRC/39/64, at 14 (Sept. 12, 2018).

[5] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 8-12 (2019).

[6] Regulation 2022/2065, of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and Amending Directive 2000/31/EC, 2022 O.J. (L 277) 1 (EU).

[7] Jeff Kosseff, The Twenty-Six Words That Created the Internet 2-3 (2019).

[8] Adrian Shahbaz & Allie Funk, Freedom on the Net 2021: The Global Drive to Control Big Tech, Freedom House 1, 15-18 (2021).

[9] Eur. Comm’n, The Digital Services Act Package, COM (2020) 825 final (Dec. 15, 2020).

[10] Commission Staff Working Document Impact Assessment Report, SWD (2020) 348 final (Dec. 15, 2020).

[11] The Santa Clara Principles on Transparency and Accountability in Content Moderation (2018), https://santaclaraprinciples.org.

[12] U.N. Off. of the High Comm’r for Hum. Rts., Guiding Principles on Business and Human Rights, U.N. Doc. HR/PUB/11/04 (2011).

[13] Jack M. Balkin, Information Fiduciaries and the First Amendment, 49 U.C. Davis L. Rev. 1183, 1186 (2016).

[14] Regulation 2022/1925, of the European Parliament and of the Council of 14 September 2022 on Contestable and Fair Markets in the Digital Sector, 2022 O.J. (L 265) 1 (EU).

 
