Published On: April 1st 2026
Authored By: Venugopala SG
Karnataka State Law University (KSLU)
Abstract
Section 79 of the Information Technology Act, 2000 provides safe harbour protection to internet intermediaries from liability arising out of the conduct of third parties. However, the rise of algorithmic governance in e-commerce platforms has fundamentally altered the nature of intermediation. This article examines whether e-commerce platforms that deploy algorithms for demand steering, ranking manipulation, and consumer profiling continue to qualify as neutral intermediaries under Section 79, or whether their active algorithmic control disqualifies them from safe harbour immunity. Drawing on Indian jurisprudence, United States circuit court decisions, and the European Union’s Digital Services Act, this article argues that platforms monetising algorithmic control over consumer choice operate beyond passive facilitation and should be held liable for algorithmic negligence.
I. Introduction
Intermediaries are third-party entities that stand between a seller and a consumer and facilitate transactions between them. Section 79 of the Information Technology Act, 2000 (IT Act) provides safe harbour protection to internet intermediaries from liability that may arise from the unlawful conduct of the parties they serve. However, these intermediaries carry an obligation to act with reasonable care under Section 79(2) in the course of facilitation. In the present technology-driven era, the concept of intermediaries is frequently discussed in relation to e-commerce platforms, where different categories of algorithms serve different purposes, including demand steering, seller visibility, and advertising. These algorithms operate on the platform's instructions and on data inputs. Even though algorithms make transactions more efficient, their deployment blurs the boundary between neutral facilitation and active participation in the marketplace.
E-commerce platforms often shield themselves through safe harbour by characterising these algorithms as automated systems that neutrally process user inputs. In reality, algorithms are designed and monetised to produce particular outcomes on the basis of human decisions. E-commerce platforms frequently shift responsibility for harms such as price manipulation, fake discounts, and biased rankings to sellers, even though the algorithms themselves control product visibility. In this process, the concept of due care becomes illusory, as algorithms determine day-to-day outcomes. Even though immunity frameworks prescribe reasonable care, e-commerce platforms conceal themselves behind the automated operation of algorithms when disputes arise. This exposes the need to hold e-commerce platforms liable for failures of algorithmic governance within the broader scope of Section 79 of the IT Act.
II. Understanding Safe Harbour: Original Rationale
Intermediaries are entities that provide a structure for conduct between various parties without intervening in the substance of what is transacted. Because they act as marketplaces, they ordinarily lack control over the behaviour of the parties; they merely provide a space for transactions to occur or for ideas to be shared. Intermediaries therefore do not stand behind every piece of work produced or shared in the marketplace; their purpose is only to facilitate. On a prima facie view, they should accordingly be protected from liability where a transaction, work, or innovation carries illegality, since they are regarded as passive infrastructure or neutral conduits that perform nothing beyond neutral facilitation and structuring. This protection is referred to as safe harbour, and it is grounded in the objective of promoting and protecting innovation while ensuring flexibility in the marketplace.
III. Indian Position: Section 79 of the IT Act
Section 79 provides that intermediaries shall not be liable for any third-party information, data, or communication link made available or hosted by them. The conditions for maintaining this immunity are as follows:
1. Intermediaries must not initiate transmission.
2. Intermediaries must not select the receiver of a transmission.
3. Intermediaries must not select or modify any information contained in the transmission.
These intermediaries must also observe due diligence while discharging their duties under the Act, including the removal of unlawful content upon receiving actual knowledge or a direction from the appropriate authority.[1]
IV. Algorithmic Intermediation: From Passive Hosting to Active Governance
Algorithmic negligence is the failure of a platform to exercise due care in deploying algorithms that affect consumer decision-making and cross the boundary of passive facilitation. Algorithms function as the invisible hand of the platform, managing the entire marketplace in such a way that the consumer's freedom of choice is confined to the results the algorithms produce. Those results are, in turn, bounded by the digital environment the platform constructs for its consumers.
The following are illustrations of algorithmic choice-making by platforms:
1. Demand Steering: Platforms direct users towards specific products by surfacing selective information, often in order to profit from their own private-label brands.
2. Ranking Manipulation: On many e-commerce platforms, products are not ranked according to the best ratings and reviews; products with low or no ratings frequently appear at the top of search results.
3. Visibility Bias and Sponsorship: Platforms can limit the visibility of products that are not enrolled in the platform's own logistics services, while sponsored advertisements push genuinely relevant matches substantially down the results (see the sketch after this list).
4. Creation of Urgency and Personalised Preference: E-commerce platforms use flash sales, limited-time offers, and similar discounts to create urgency, which drives large numbers of consumers towards preferred brands.
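To make the mechanism concrete, the following is a minimal illustrative sketch in Python of how a ranking function might blend organic relevance with commercial signals. The field names and weights are entirely hypothetical and do not describe any real platform's system; the point is only that each boost is a human design decision embedded in code.

    # Illustrative sketch only: hypothetical weights showing how commercial
    # signals can dominate a ranking score. Not any real platform's algorithm.

    def rank_score(listing: dict) -> float:
        """Blend organic relevance with commercially chosen boosts."""
        score = listing["relevance"]          # how well the item matches the query
        if listing.get("sponsored"):
            score += 0.45                     # paid-placement boost (a design choice)
        if listing.get("platform_logistics"):
            score += 0.25                     # boost for platform-fulfilled items
        if listing.get("private_label"):
            score += 0.30                     # boost for the platform's own brand
        return score

    listings = [
        {"name": "best organic match", "relevance": 0.90},
        {"name": "platform private label", "relevance": 0.55,
         "private_label": True, "platform_logistics": True},
        {"name": "sponsored listing", "relevance": 0.50, "sponsored": True},
    ]

    # The most relevant product no longer appears first: every boost above is
    # a monetised human decision, not a neutral computation.
    for item in sorted(listings, key=rank_score, reverse=True):
        print(f"{rank_score(item):.2f}  {item['name']}")

On such a design, the least relevant listings can be served first, and it is precisely this kind of choice-making that, on the argument developed below, takes a platform beyond passive facilitation.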
The pattern is evident: some categories of algorithms are no longer tools of hosting alone; they structure the choices of consumers. Crucially, algorithms are directed to produce such outputs. They are the tools through which platforms implicitly and substantially influence decision-making. Should such platforms retain safe harbour immunity? Section 79 of the IT Act provides guidance on this question.
Section 79(2)(b) states that intermediaries must not:
– initiate transmission;
– select the receiver of a transmission; or
– select or modify the information contained in a transmission.
Algorithms in effect initiate transmission through recommendation pop-ups that direct a consumer towards a specific product. They select the receiver of the transmission (the consumer) on the basis of the consumer's needs and the platform's profit-driven directives. By generating the recommendation interface without due diligence or disclosure, platforms arguably forfeit part of their intermediary immunity.
With regard to the third condition, that intermediaries must not select or modify information contained in a transmission, it follows that platforms should present information as received and must not take sides. Platforms that use algorithms to recommend products based on specific keywords, while suppressing other information, implicitly violate this condition by enabling selective visibility and, in effect, modification of information.
Even where platforms invoke automated-systems defences and deny human intervention, the fact that the design of the algorithm is directed by human decisions satisfies the criterion of human intervention in the choice-making process.
This indicates that intermediaries relying on algorithms beyond the limits of passive facilitation should not be granted immunity under safe harbour protection.
In Christian Louboutin SAS v. Nakul Bajaj,[2] the court held that an e-commerce intermediary that goes beyond passive hosting to control logistics, packaging, and customer engagement may lose safe harbour protection. This case is relevant to the present discussion because the platform’s control over customer engagement is analogous to algorithmic customer selectiveness, product selectiveness, and the creation of urgency through recommendations.
Reading the statutory framework together with the principle in Nakul Bajaj, it may be argued that courts should decline to grant intermediary immunity where algorithmic features embed choices, options, and behaviours designed to influence the consumer.
V. Global Standpoint
In the United States, Section 230 of the Communications Decency Act (CDA)[3] provides a similar framework by not treating platforms as publishers or speakers of third-party content. The traditional judicial approach has treated algorithms as neutral tools performing general transmission functions, but recent judicial trends have questioned this approach by identifying the grey area of content amplification and active participation.
Force v. Facebook, Inc. (2d Cir. 2019):[4] This case examined the applicability of Section 230 to the immunity of Facebook's algorithms. The majority treated the algorithms as neutral tools and held that Section 230(c)(1) should be broadly construed. Chief Judge Katzmann, dissenting, argued that Section 230(c)(1) does not expressly grant immunity for the full range of platform activities. He observed that it protects only the exercise of a publisher's traditional editorial functions (such as whether to publish, postpone, or alter content), and not algorithms that create and communicate their own messages through recommendation patterns that mimic actual conversations.
This dissent marks the boundary within which safe harbour immunity should operate in the Indian context as well. It indicates that facilitation must not extend to the point where algorithms directly urge user behaviour through recommendations that create an implied form of control.
Anderson v. TikTok, Inc. (3d Cir. 2024):[5] The Third Circuit held that TikTok’s recommendation algorithms are not necessarily protected by Section 230 of the CDA because recommendation decisions constitute the expressive activity of the platform. This may be interpreted to mean that algorithmically recommended results are not merely third-party content; rather, platforms bear responsibility for choosing which content is published and promoted. This case distinguishes between algorithmic prioritisation as passive hosting and algorithmic prioritisation as active recommendation.
M.P. v. Meta Platforms, Inc. (4th Cir. 2025):[6] The Fourth Circuit further elaborated on how algorithmic prioritisation reinforces platform liability. The court held that an algorithmic recommendation feature that recommends extremist groups or individuals constitutes the platform’s own speech. Because such speech is the platform’s own conduct rather than third-party content, it may fall outside the immunity of Section 230 of the CDA.
As of 2025 in the United States, lawsuits against platforms for algorithmic negligence have begun to characterise algorithms as “defective products” rather than hosting services. This product-liability approach treats algorithmic designs that allegedly radicalise or harm users as design defects, thereby placing the platforms beyond the line of mere facilitation.
Recent debates on amending Section 230 have proposed a “duty of reasonable care,” requiring platforms to exercise reasonable caution in the design and management of recommendation systems to prevent harm.[7]
The European Union, in its Digital Services Act,[8] maintains safe harbour exemptions only for services that operate on a genuinely neutral basis, namely mere conduit, caching, and hosting, and it introduces governance requirements for algorithmic recommender systems. EU law thereby emphasises that if an algorithm actively shapes user interactions by the platform’s own means, the platform is doing more than facilitating.
VI. Way Forward
Section 79 of the IT Act already provides guidelines for granting safe harbour protection. However, its formal, surface-level criteria must be re-defined through a functional role analysis that focuses on the degree of control and the method of facilitation, and verifies that an intermediary does not, through any category of algorithm, shape the outcome of a transaction. The scope of reasonable care under Section 79(2) should be expanded to include periodic auditing of algorithmic outputs and the enforcement of responsible design standards for recommendation systems. With these measures in place, safe harbour protection should be preserved to encourage innovation and protect genuine intermediaries, but it must not confer immunity where platforms monetise control and shape consumer choice.
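What periodic auditing might look for can be sketched simply. The following illustrative check, again in Python with hypothetical field names and thresholds, measures how far a displayed ranking departs from a relevance-only ordering; a persistent divergence would be one plausible signal for internal or regulatory review.

    # Illustrative audit sketch: flag queries where the displayed ranking
    # departs sharply from a relevance-only ordering. Threshold is hypothetical.

    def displacement(displayed: list[str], by_relevance: list[str]) -> float:
        """Average number of positions each item moved from its relevance rank."""
        moves = [abs(displayed.index(item) - by_relevance.index(item))
                 for item in by_relevance]
        return sum(moves) / len(moves)

    def audit(query: str, results: list[dict], threshold: float = 1.0) -> None:
        displayed = [r["name"] for r in results]
        by_relevance = [r["name"] for r in
                        sorted(results, key=lambda r: r["relevance"], reverse=True)]
        score = displacement(displayed, by_relevance)
        verdict = "REVIEW" if score > threshold else "ok"
        print(f"{query}: mean displacement {score:.2f} -> {verdict}")

    # Hypothetical sample: the order served to the user vs. underlying relevance.
    audit("water bottle", [
        {"name": "private label",   "relevance": 0.55},
        {"name": "sponsored",       "relevance": 0.50},
        {"name": "organic match A", "relevance": 0.90},
        {"name": "organic match B", "relevance": 0.80},
    ])

An audit of this kind does not prohibit commercial ranking; it makes the divergence between commercial and organic ordering measurable and reviewable, which is the kind of record an expanded duty of reasonable care under Section 79(2) could require.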
References
[1] Information Technology Act 2000 (India), s 79.
[2] Christian Louboutin SAS v Nakul Bajaj [2018] SCC OnLine Del 12215.
[3] Communications Decency Act 1996, 47 USC § 230.
[4] Force v Facebook, Inc 934 F 3d 53 (2d Cir 2019).
[5] Anderson v TikTok, Inc 116 F 4th 180 (3d Cir 2024).
[6] M.P. v Meta Platforms, Inc (4th Cir 2025).
[7] Lauren Feiner, “Lawmakers Want to Let Users Sue Over Harmful Social Media Algorithms” (The Verge, 19 November 2025) accessed 10 February 2026.
[8] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services (Digital Services Act) [2022] OJ L277/1.




