Is Delfi restricting freedom of expression online in Estonia?

Lorna Woods, Professor of Media Law, University of Essex

Under what circumstances can online freedom of expression be restricted? The recent Grand Chamber judgment of the European Court of Human Rights in Delfi v. Estonia tackles this question, focusing on reader comments posted in response to a news article. The ruling raises thought-provoking issues of both human rights and EU law, which this analysis examines in turn.

The Facts

Delfi, one of Estonia’s largest online news portals, allows readers to comment on articles and operates a system for moderating and removing inappropriate content. An article about a ferry company’s destruction of planned ice roads, itself considered balanced and impartial, attracted many comments, some containing abusive language and threats directed at an individual identified as L. Some weeks later, L requested the removal of approximately 20 comments and sought damages. Delfi removed the comments promptly but refused to pay damages. The case went to court, where L was awarded a reduced sum in damages. Delfi’s argument that it acted as a neutral intermediary and was therefore exempt from liability under the EU’s e-Commerce Directive was rejected. Delfi appealed to the European Court of Human Rights but lost in a unanimous chamber judgment, and then appealed to the Grand Chamber.

The Grand Chamber Decision

The Grand Chamber, though not unanimously, upheld the initial judgment and its reasoning. They began by reviewing Article 10 principles of the European Convention on Human Rights, drawing from previous case law. While these are established legal tenets, the Grand Chamber, from the outset, seemed wary of the type of content found online.

They stated:

while the Court acknowledges that important benefits can be derived from the Internet in the exercise of freedom of expression, it is also mindful that liability for defamatory or other types of unlawful speech must, in principle, be retained and constitute an effective remedy for violations of personality rights. [110]

Referencing Council of Europe Recommendations, the Grand Chamber advocated for:

a “differentiated and graduated approach [that] requires that each actor whose services are identified as media or as an intermediary or auxiliary activity benefit from both the appropriate form (differentiated) and the appropriate level (graduated) of protection and that responsibility also be delimited in conformity with Article 10 of the European Convention on Human Rights and other relevant standards developed by the Council of Europe” (see § 7 of the Appendix to Recommendation CM/Rec(2011)7, ..). Therefore, the Court considers that because of the particular nature of the Internet, the “duties and responsibilities” that are to be conferred on an Internet news portal for the purposes of Article 10 may differ to some degree from those of a traditional publisher, as regards third-party content. [113]

The Grand Chamber applied established freedom of expression principles to the case. First, it found an interference with Article 10(1) of the Convention, which can be justified only if it satisfies the three-part test in Article 10(2): the restriction must be prescribed by law, pursue a legitimate aim, and be necessary in a democratic society. The existence of an interference and the legitimacy of its aim were not contested; its lawfulness and its necessity in a democratic society were.

Lawfulness

Lawfulness requires that the rule be accessible and its consequences foreseeable by the affected party. Delfi argued that it could not have foreseen that the Estonian Law of Obligations would apply, as it assumed it would be protected by the intermediary liability provisions implementing the e-Commerce Directive. The national authorities rejected this interpretation, prompting Delfi’s claim that national law had been misapplied. The Grand Chamber reiterated that its function was not to take the place of the domestic courts but to assess whether the methods adopted and their effects complied with the Convention. On the facts, although some signatory states employ a more nuanced approach, as recommended by the Council of Europe, the Grand Chamber concluded that the application of standard publishing rules was foreseeable. Notably, it highlighted that:

as a professional publisher, the applicant company should have been familiar with the legislation and case-law, and could also have sought legal advice. [129]

Necessary in a Democratic Society

The Grand Chamber, citing established jurisprudence, emphasized that a “pressing social need” is required to justify limiting freedom of expression, given its significance. The court considered whether the measure was proportionate to the aim pursued and whether the reasons given by the national authorities were “relevant and sufficient”. While underscoring the media’s role, the court acknowledged that different standards may apply to different forms of media. It reiterated its view of the internet’s potential for both benefit and harm ([133]). Reaffirming the need for a balance between Articles 8 and 10, the court endorsed the Chamber’s criteria: the context of the comments, the measures Delfi took to prevent or remove defamatory comments, the liability of the comments’ authors as an alternative to Delfi’s, and the consequences of the domestic proceedings for the news organization ([142-3]).

The Grand Chamber emphasized the nature of the comments: their classification as hate speech, their inherently unlawful nature ([153]), and asserted that expecting a news platform to regulate hate speech and incitements to violence was not “private censorship” ([157]), given the widespread opportunities for online expression. The idea that a news platform should be aware of its content became crucial in determining proportionality. Measured against this, Delfi’s response was not deemed sufficiently prompt. Additionally, the court recognized that ‘the ability of a potential victim of hate speech to continuously monitor the Internet is more limited than the ability of a large commercial Internet news portal to prevent or rapidly remove such comments’ [158]. Ultimately, the damages awarded against Delfi were not substantial, and the domestic proceedings did not force a change in its business model. The court therefore found the interference justified.

Two concurring judgments and one dissent were presented. One concurring judge (Zupančič), after criticizing anonymous commenting, argued:

To enable technically the publication of extremely aggressive forms of defamation, all this due to crass commercial interest, and then to shrug one’s shoulders, maintaining that an Internet provider is not responsible for these attacks on the personality rights of others, is totally unacceptable.

According to the old tradition of the protection of personality rights, …, the amount of approximately EUR 300 awarded in compensation in the present case is clearly inadequate as far as damages for the injury to the aggrieved persons are concerned.

Human Rights Issues: Initial Reaction

This lengthy ruling will undoubtedly generate extensive discussion. Immediate concerns stem from the Court’s perception of the internet as a platform for harmful and defamatory material, a perspective seemingly influencing their Article 10(2) analysis and, notably, the balancing of Articles 10 and 8. Acknowledging the distinct operational contexts and impact of various media forms, the Grand Chamber seemingly overlooked the crucial role of intermediaries in providing and managing information. While recognizing that the internet might entail different ‘duties and responsibilities’, the required standard of care appears high.

The notion that the portal has control over user-generated content ignores the complexities of information management. The concurring opinions placed great weight on the difference between requiring a portal to remove clearly illegal content on its own initiative and demanding pre-publication screening of user contributions. While the two are distinct, both require monitoring (or an improbable ability to predict when hate speech might be posted). Indeed, the dissenting judges argued that the requirement amounts in practice to blanket prior restraint (para 35). Both stances implicitly reject the notice-and-takedown systems widely used in Europe, partly under the influence of the e-Commerce Directive. The focus on content produces an almost inverted approach to freedom of expression: speech must be justified in advance to avoid liability. This seems to disregard the court’s own case law on political speech and its repeated emphasis on the media’s societal importance.

EU Law Elements: Consistency with the e-Commerce Directive?

The Delfi judgment raises practical questions for news platforms hosting third-party content, particularly reader comments. A key concern is how the judgment aligns with the EU’s policy approach to the internet and intermediaries. The e-Commerce Directive, in Articles 12-15, limits intermediary liability. These provisions were seen as crucial to the free flow of services within the EU and to fostering the development of the internet and of online services. The Directive identifies three categories of intermediary: those serving as mere conduits, those providing caching, and those hosting content. The defining characteristic of these intermediaries is that they facilitate through technical services rather than contributing to specific content. Uncertainty surrounds the scope of the hosting category in particular, given the emergence of diverse services that challenge the understanding of the internet prevailing when the Directive was enacted. The initial chamber decision sparked concerns that the judgment disregarded these underlying policy choices about intermediaries and their crucial role in the functioning of the internet, particularly for users. The question arises: to what extent, if any, does the judgment deviate from the Directive?

Before turning to the substance, it is important to note that the Strasbourg Court was not deciding whether Delfi was a neutral or passive intermediary. Rather, it evaluated the Estonian courts’ reasoning. The real question is whether the Estonian courts’ final conclusion was unreasonable in light of current jurisprudence from the European Court of Justice (even if certain aspects of their reasoning are debatable).

The intermediary liability provisions provide varying levels of protection, with the highest afforded to primarily technical services. For hosting services, protection depends on the absence of knowledge of the infringing content. The interpretation of certain phrases in Article 14(1) of the Directive, such as ‘awareness’, ‘actual knowledge’ and the obligation to act ‘expeditiously’, has been debated. The Directive suggests notice-and-takedown systems as a means of addressing problematic content. Articles 14 and 15 do not prevent Member States from requiring hosting providers to exercise reasonable care in detecting and preventing specific illegal activities as defined by national law (recital 48). Article 15 precludes Member States from imposing on internet intermediaries, in respect of activities covered by Articles 12 to 14, a general obligation to monitor transmitted or stored information or to actively seek out circumstances indicating illegal activity. It does not preclude public authorities from imposing monitoring obligations in specific, clearly defined individual cases (recital 47). It is implicit that Article 15 applies only to intermediaries qualifying for the benefit of Articles 12, 13 or 14.

Several cases brought before the European Court of Justice have sought to clarify the scope of Article 14 and the extent of the protection provided by Article 15. For example, SABAM v Netlog (Case C-360/10) involved a social media site that SABAM, a Belgian copyright collecting society, asked to implement a comprehensive filtering system to prevent the unauthorized use of copyrighted music and audiovisual material. The ECJ, in addition to confirming the Article 15 prohibition on general monitoring, noted that filtering systems might not effectively distinguish between legal and illegal content, potentially impairing users’ freedom of expression (access to information). Here, the ECJ seems to mirror the ECtHR’s stance in Yildirim on ‘collateral censorship’. The applicability of Netlog to Delfi is limited, however, since Article 15 applies only to neutral intermediaries. It is unclear whether the ECJ would classify a news platform as neutral, given its agenda-setting function, which might be seen as ‘inviting’ certain types of response, or its operation of filtering and moderation systems.

In the Google Adwords case (Joined Cases C-236/08, C-237/08 and C-238/08, judgment of 23 March 2010), the ECJ held that a service provider qualifies for Article 14 ECD protection if its conduct is ‘neutral, in the sense that its conduct is merely technical, automatic and passive, pointing to a lack of knowledge or control of the data which it stores’ (para 114). One could argue that by inviting commentary on specific subjects a website loses its neutrality, though how explicit that invitation must be is open to question. In L’Oreal (Case C-324/09, judgment of 12 July 2011), the Court held that the Article 14 exemption should not apply where the host plays an active part in presenting and promoting user-generated content, thereby gaining knowledge of or control over that data. To benefit from the exemption, the host must remove offending data once it becomes aware of information that would alert a ‘diligent economic operator’ to unlawful activities. This raises questions about the role of moderation and filtering in this context, particularly as regards intermediary control over content. As for Delfi, there are notable similarities between the approaches of the ECJ and the ECtHR. Both courts seem to accept that entities operating a business are better placed to anticipate and assess potential problems. Their views of commercial activity differ, however. The ECJ stated in Google Adwords:

It must be pointed out that the mere facts that the referencing service is subject to payment, that Google sets the payment terms or that it provides general information to its clients cannot have the effect of depriving Google of the exemptions from liability provided for in Directive 2000/31. [116]

The reference to ‘general information’ suggests that a portal’s general policies for contributors would not, by themselves, be decisive.

In Papasavvas v O Fileleftheros (Case C-291/13, judgment of 11 September 2014), a case concerning online defamation in a news story published on a newspaper’s website, the ECJ applied the criteria from L’Oreal v eBay and Google Adwords and ruled:

Consequently, since a newspaper publishing company which posts an online version of a newspaper on its website has, in principle, knowledge about the information which it posts and exercises control over that information, it cannot be considered to be an ‘intermediary service provider’ within the meaning of Articles 12 to 14 of Directive 2000/31, whether or not access to that website is free of charge. [45]

There are parallels between the reasoning of the Strasbourg court and that of the ECJ, both highlighting the idea of control over information. Differences remain, however, in the degree of control over the defamatory content. In Papasavvas the control was more direct than in Delfi, and the case did not address a newspaper’s ability to predict its audience’s reaction to stories. Nevertheless, given its reasoning in L’Oreal on the ‘promotion’ of certain content and the requirements of a diligent economic operator, it is plausible that the ECJ would not dismiss the agenda-setting argument made by the Strasbourg court.

The Strasbourg court’s reasoning effectively requires Delfi to monitor user content. Had Delfi been classified as an intermediary within the meaning of Articles 12-14, this would have contradicted Article 15 of the e-Commerce Directive and its implementation in national law. Since Delfi was not considered such an intermediary, Article 15 does not apply, avoiding a direct conflict between the ruling and the EU law position. Whether this outcome is good internet policy is a separate question. The case and its implications may well feed into the EU Commission’s forthcoming review of intermediaries under its Digital Single Market strategy.

*Part of this post was previously published on the LSE Media Policy Project blog

Barnard & Peers: chapter 9

Photo credit: speech-rights-and-restrictions.weebly.com

Licensed under CC BY-NC-SA 4.0