A case involving Facebook, defamation, and free speech is currently pending before the CJEU.

Preliminary Notes on the Pending Case Glawischnig-Piesczek v. Facebook Ireland Limited

Dr Paolo Cavaliere, University of Edinburgh Law School, paolo.cavaliere@ed.ac.uk

Introduction

The Court of Justice of the European Union (CJEU) will soon deliver a potentially groundbreaking decision on political speech and internet platform liability. The Court will address two related but distinct issues: the extent of platforms’ responsibility to remove illegal content, and specifically whether it extends to content similar to posts previously deemed unlawful; and whether such removal obligations can reach beyond the territorial jurisdiction of the court issuing the order. The case will essentially determine how far platforms must go in proactively assessing the legality of user-generated content, and how much power courts have to enforce their standards of acceptable speech globally.

Summary of the case

The plaintiff, a former Austrian MP and Green Party spokeswoman, criticized the Austrian government’s response to the refugee crisis in a 2016 news article. A Facebook user shared the article on their personal profile with a derogatory comment. The plaintiff asked Facebook to remove the post in July 2016, but the platform refused, claiming the post violated neither its terms of service nor Austrian law. The plaintiff consequently brought legal action, requesting the removal of the original post and of any similar content. Facebook removed the initial post after the Vienna Commercial Court deemed it unlawful.

However, the court held that Facebook’s failure to remove the post upon the plaintiff’s first request meant the platform was not exempt from secondary liability, and it ordered Facebook to remove any similar future posts containing the plaintiff’s picture and “analogous” comments. This part of the injunction was overturned by the Vienna Higher Regional Court, which found that it amounted to general monitoring. The Higher Regional Court nonetheless upheld that the original post was unlawful and should have been removed upon the plaintiff’s first notification, and it confirmed that Facebook must remove any future posts using the same derogatory language alongside any picture of the plaintiff. Facebook appealed this decision to the Austrian Supreme Court.

The Supreme Court referred two main sets of questions to the CJEU:

  • First, whether requiring host providers to remove posts “identically worded” to content already found illegal is compatible with Article 15(1) of the E-commerce Directive and, if so, whether the obligation extends to content that is similar in substance but differently worded. These questions concern platforms’ responsibilities in assessing unlawful speech and the limits of “active monitoring.”

  • Second, whether national courts can order platforms to remove content globally or only within their own jurisdiction. This question concerns the acceptability of extra-territorial injunctions for content removal.

Analogous content and active monitoring

It is crucial to note that, although framed as a defamation case, the dispute centers on political speech. The Austrian court deemed the Facebook post a violation of Article 1330 of the Austrian Civil Code, which protects individual reputation, but the plaintiff’s status as the spokeswoman of a national political party adds another layer. The case law of the European Court of Human Rights (ECtHR) construes defamation more narrowly, and the limits of acceptable criticism more broadly, where politicians are concerned, especially as regards their public statements. In this case the plaintiff had publicly addressed her party’s immigration policy, a significant detail given that the ECtHR links political speech to the public interest and requires that interferences with it be kept to a minimum. By European standards the content qualifies as political commentary, making the case’s outcome a potential new standard for online political speech.

Intermediaries enjoy immunities under the E-commerce Directive, including a prohibition on general monitoring obligations imposed by state authorities. However, a 2011 report by the UN Special Rapporteur on freedom of opinion and expression clarified that blocking and filtering measures are acceptable online if they target internationally prohibited categories of speech and the determination of unlawful content is made by a judicial authority. A court order specifying the exact phrases deemed offensive could therefore, depending on its clarity, give platforms sufficiently clear guidance.

In essence, the demand to remove “identical” content reflects mounting pressure on platforms to filter content proactively. Recurring unlawful content, whether identical or merely similar, is a growing problem. During a 2017 workshop, industry experts told EU Commission delegates that repeat infringements are so common that, at least in the field of intellectual property, 95% of notices on platforms operating notice-and-takedown mechanisms flagged the same content on the same sites. If reposting rates are similar for content infringing personality rights such as reputation, clearing platforms of unlawful content looks like an insurmountable task.

However, the risk of overreach is constant. Early drafts of Germany’s Network Enforcement Law, which would have required platforms to prevent re-uploads of content deemed unlawful, illustrate this concern. The provision, similar to the obligation at issue here, was removed from the final law over fears of excessive blocking and concerns that automated filters cannot grasp context and nuance in seemingly similar content, such as irony or criticism.

German lawmakers ultimately rejected the requirement as excessive even within a law widely criticized for failing to balance platform responsibilities against freedom of expression adequately, which underlines the significance of the CJEU’s upcoming decision: a ruling in favor of the plaintiff could revive this provision across Europe.

The notion that platforms should monitor re-uploaded content is gaining momentum in the digital sphere and is beginning to shape content regulation. In search engine optimization, “duplicate content” refers to content copied or reused from other web pages, which is sometimes legitimate (e.g., the mobile version of a webpage) but often plagiarism. Definitions of the relevant criteria vary: duplicate content is generally understood as “identical or virtually identical to existing online content,” but Google broadens the notion to include “appreciably similar” content. Content regulation cannot afford this flexibility when defining “identically worded” content: the Special Rapporteur’s requirement of a judicial determination and the E-commerce Directive’s prohibition of general monitoring obligations both preclude it.
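The gap between the two standards can be made concrete. The following minimal sketch (in Python, standard library only; the example sentences and the 0.8 threshold are invented purely for illustration) contrasts a deterministic “identically worded” test with an “appreciably similar” test, which inevitably turns on an arbitrary similarity threshold chosen by the platform rather than by any court:

```python
import difflib
import re

def normalize(text: str) -> str:
    """Lower-case and collapse punctuation/whitespace so trivial edits
    (capitalization, extra spaces) do not defeat an exact-match test."""
    return re.sub(r"[\W_]+", " ", text.lower()).strip()

def identically_worded(post: str, banned: str) -> bool:
    """The narrow test: true only for verbatim reposts (after normalization)."""
    return normalize(post) == normalize(banned)

def appreciably_similar(post: str, banned: str, threshold: float = 0.8) -> bool:
    """The broad test: true above some similarity ratio. No legal standard
    supplies the threshold; it is a private design choice."""
    ratio = difflib.SequenceMatcher(None, normalize(post), normalize(banned)).ratio()
    return ratio >= threshold

# Invented example text, not from the actual case:
banned_post = "She is a corrupt traitor."
print(identically_worded("SHE IS A CORRUPT TRAITOR!", banned_post))  # True: a verbatim repost
print(appreciably_similar("She is corrupt, and a traitor too.", banned_post))  # outcome depends entirely on the threshold
```

Even this toy example shows why “identically worded” is administrable while “appreciably similar” is not: the former requires no judgment, whereas the latter smuggles a substantive assessment of legality into a numeric parameter.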

For copyright protection, service providers such as YouTube can automatically scan user-uploaded content and compare it against a database of protected works supplied by rights holders. For speech that infringes personality rights, or for other content-based restrictions, by contrast, the discourse and its context must be analyzed, which in practice requires private intermediaries to determine the legality of speech.
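A minimal sketch of this copyright-style approach (hypothetical register contents; Python standard library only, with a plain SHA-256 hash standing in for the perceptual fingerprints real systems use for audio and video) shows both why it scales and why it is blind to context: the match is purely mechanical, so any edit defeats it, while a verbatim quotation inside a news report or critical commentary would hash or match just the same:

```python
import hashlib

# Hypothetical register of fingerprints of items a court has already
# adjudicated unlawful.
unlawful_register = {
    hashlib.sha256("text of the adjudicated post".encode()).hexdigest(),
}

def blocked_on_upload(content: str) -> bool:
    """True only if the upload is byte-identical to a registered item."""
    return hashlib.sha256(content.encode()).hexdigest() in unlawful_register

print(blocked_on_upload("text of the adjudicated post"))          # True: exact re-upload
print(blocked_on_upload("text of the adjudicated post, edited"))  # False: any change defeats the hash
print(blocked_on_upload('A report quoting "text of the adjudicated post"'))  # False here, but a substring scan would flag it regardless of context
```

The mechanism never asks what the content means; it only asks whether it has been seen before, which is precisely why it cannot be transplanted to speech whose lawfulness depends on context.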

Assessing the lawfulness of speech rarely turns on wording alone; context is crucial. The ECtHR’s case law abounds with complex evaluations of local context undertaken to determine whether an interference with speech is justified.

For instance, in Le Pen v. France (2010) the ECtHR held that comments which seemed derogatory toward a minority had to be considered within the country’s ongoing public discourse, and it emphasized the domestic courts’ responsibility to determine the scope and terms of national debates when evaluating the necessity of an interference. In Ibragimov v. Russia (2018) the Court highlighted that what counts as an attack on religious beliefs varies significantly from place to place, with no uniform European standard: as with political debates within a society, domestic authorities are better positioned to determine what criticism of religion is acceptable, given their understanding of the local context.

Historical context, too, consistently informs whether there is a pressing social need for a restriction, justifying different outcomes for seemingly similar speech. Outlawing Holocaust denial is a legitimate interference in countries whose history requires proactive measures to confront their role in the atrocities (see Witzsch v. Germany (no. 1), 1999; Schimanek v. Austria, 2000; Garaudy v. France, 2003), while a comparable ban on denial of the Armenian genocide would be excessive in a country like Switzerland, which has no strong connection to the events of 1915 in the Ottoman Empire (Perinçek v. Switzerland, 2015).

The ECtHR’s thorough analysis in Dink v. Turkey (2010) exemplifies the complexity of analyzing language within its specific historical and societal context. The Court examined expressions bordering on hate speech in articles by the late Turkish-Armenian journalist Hrant Dink, such as “the purified blood that will replace the blood poisoned by the ‘Turk’ can be found in the noble vein linking Armenians to Armenia” and references to the Armenian heritage of Atatürk’s adopted daughter. It ultimately concluded that Dink was not describing Turkish blood as poison but criticizing the campaign methods of the Armenian diaspora. To determine whether those expressions denigrated Turkishness, and how the references to blood and to Atatürk’s daughter could be considered sensitive in Turkish ultranationalist circles and potentially incite animosity, the Court relied heavily on the assessment of the Principal State Counsel at the Turkish Court of Cassation, who had analyzed all of Dink’s articles from 2003-2004.

Beyond the socio-political context, the Court often focuses on culturally specific uses of language, concluding that the words themselves matter less than their contextual meaning in determining whether they cross into unlawful speech. In Leroy v. France (2008) the Court meticulously assessed the use of “We” and of a satirical advertising slogan to determine whether a cartoon mocking the 9/11 attacks constituted hate speech.

Beyond the Court’s own experience, there are numerous examples of seemingly harmless words that become offensive in certain contexts. In South Slavic-speaking countries, especially Serbia, “shiptari” refers to Albanians; it carries a derogatory connotation because Slobodan Milošević used it to express contempt for the Albanian minority in Yugoslavia. In Greece, the far right appropriated “lathrometanastes” (“illegal immigrants”) to misrepresent the legal status of asylum seekers and refugees, denying them protection and rights. The term, arguably outside the bounds of acceptable political discourse, now features in research on indicators of intolerant discourse in Europe.[1]

These examples underline the importance of reading language against historical events and social dynamics, as it can carry meanings well beyond the surface. The task is already challenging for domestic and supranational courts; expecting platforms to accomplish it by simply scanning for synonyms and similar phrasings is overly simplistic.

Extraterritorial injunctions

This naturally raises the question whether it would be appropriate for the CJEU to rule in favor of an extraterritorial injunction. The Austrian court’s order targets an entity established outside its jurisdiction and aims to have the content removed from Facebook globally. The novelty lies not in the Austrian court’s involvement but in the potential worldwide effect of its decision: whether it is appropriate for the injunction to reach beyond national boundaries and remove the content from Facebook everywhere.

The CJEU has interpreted jurisdiction broadly before. In L’Oréal v. eBay (2011) the Court applied EU trademark law because the trademarked goods, although offered for sale from outside the EU, targeted EU consumers. In Google Spain (2014) the Court applied EU data protection law to an EU citizen’s data processed “in the context of the activities” of an EU establishment, even though the processor was located in a third country, reasoning that limiting de-listings geographically would not adequately protect data subjects’ rights. The same logic led to the Schrems (2015) decision, which applied EU data protection law to transfers of personal data to the US.

The CJEU’s case law thus suggests that extraterritorial court orders are sometimes necessary to ensure the effectiveness of EU law and to protect the rights of European citizens and businesses. The Court has granted extraterritorial reach where fundamental rights of EU citizens are at stake (e.g., the processing of personal data) or where actions carried out abroad directly challenge rights protected domestically, such as trademarks. It is unclear whether the present case fits these circumstances, as limiting political speech calls for considerations of a different kind.

Politicians are entitled to protect their reputations. However, when criticism forms part of an ongoing public debate, the boundaries of acceptable speech expand. Whether speech contributes to a social conversation is context-dependent: a conversation that is irrelevant or offensive in one national public sphere may be highly relevant elsewhere, especially for minorities or diasporas, whose right to access information a global removal could infringe.

The CJEU has emphasized connecting factors as the justification for extraterritorial orders. Following its own precedent, the Court must identify a connecting element that justifies giving global reach to the Austrian court’s local assessment. A key principle established in L’Oréal is that the mere accessibility of a website is not enough to found jurisdiction; national courts must assess this. With the exception of the ECtHR, which applies a broad jurisdictional approach (Perrin, 2005), most international policymakers (e.g., the UN Special Rapporteur on Freedom of Opinion and Expression and the OSCE Representative on Freedom of the Media) and courts favor a “real and substantial connection” to justify jurisdiction over online content, emphasizing judicial self-restraint.

As regards personality rights, the CJEU’s 2017 Bolagsupplysningen decision (departing from the established line of Shevill, 1995, and eDate, 2011) suggests that when inaccurate online information infringes those rights, a request for its correction or removal is indivisible, so that a court with jurisdiction may rule on the entirety of the request.

However, this controversial precedent might not apply here. Like previous CJEU decisions, Bolagsupplysningen reasons that expansive jurisdiction ensures the protection of citizens’ fundamental rights, preventing their negation through fragmented territorial application. For political speech, where limitations must be justified by an overriding public interest such as public safety, this connecting element is far less evident: it cannot be presumed that the same speech would be inflammatory everywhere, given differing social and political contexts. Public order, in other words, is best served by geographically sensitive protection.

This relates to the assessment of a measure’s necessity and proportionality prior to content removal. The geographical scope of a restriction generally forms part of the least-restrictive-means test. In Christians against Racism and Fascism (1980), the European Commission of Human Rights argued that even where security concerns outweigh the harm of suppressing speech and thus justify a ban, its scope should be “narrowly circumscribed in terms of territorial application” so as to minimize its negative effects. Similarly, the 2010 OSCE/ODIHR – Venice Commission Guidelines on Freedom of Peaceful Assembly (cited by the ECtHR in Lashmankin v. Russia, 2017) state that blanket restrictions on the locations of assemblies are in principle problematic, since proportionality requires the “least intrusive means,” so that restrictions on location should be evaluated case by case. Applied to online communication, this means that content removal orders should have a geographical reach limited in proportion to the interest protected. A global injunction to remove commentary on national politics therefore seems unlikely to qualify as the least intrusive means.

Conclusions

The CJEU’s upcoming decision has the potential to reshape intermediary liability and to redefine the boundaries of acceptable speech. Requiring platforms to remove “identical or analogous” content means that, beyond deleting exact copies of content already deemed illegal, they will have to actively determine the legality of third-party content. While the re-uploading of illegal content is a growing concern, the appropriateness of this measure must be weighed carefully: solutions borrowed from fields such as copyright protection may clash with the specificities of content regulation and infringe European and international standards protecting freedom of expression online.

Similarly, granting an extraterritorial injunction in this case would align with a recent trend in privacy and data protection: thanks to its global reach, the GDPR is becoming a worldwide standard for the processing of personal data, to the benefit of EU citizens and the protection of their rights. Applying the same logic to the standards of legitimate political speech is a different matter. It is debatable whether the EU (or any jurisdiction) should set a global benchmark for content regulation, and restricting access to content beyond the borders of the state where the dispute arose curtails other citizens’ right to information without any substantial corresponding benefit, such as the protection of public safety, for the citizens of the first state.

Barnard & Peers: chapter 9

Photo credit: Slate magazine

* * *

[1] L. Karamanidou (2016) ‘Violence against migrants in Greece: beyond the Golden Dawn’, Ethnic and Racial Studies, 39:11, 2002-2021; D. Skleparis (2016) ‘(In)securitization and illiberal practices on the fringe of the EU’, European Security, 25:1, 92-111.

Licensed under CC BY-NC-SA 4.0