A Story of Two Institutions: Regulation of Hate Speech in Europe

Clotilde Pégorier, Lecturer in Law, University of Essex

The regulation of hate speech has recently resurfaced as a crucial legal and political topic in Europe. While the complexity of this issue and the varying views on balancing legislation with rights protection are well-known, the contrasting approaches of the EU and the Council of Europe, two closely linked entities, are notable. This difference raises questions about its underlying causes and its broader ramifications for legislation in Europe.

The EU’s Approach to Combatting Online Hate Speech

The EU’s efforts, particularly in tackling online hate speech, illustrate its stance on this issue. Following the Brussels attacks of March 22, 2016, and driven by concerns about terrorism and radicalization, the EU intensified its efforts to combat hate speech. This built upon the work initiated in 2008 with Council Framework Decision 2008/913/JHA, which aimed to criminalize specific forms of racism and xenophobia. As part of its 2015-2020 security strategy, the Commission proposed collaborative actions in seven areas to aid Member States in countering radicalization. Recognizing the internet’s increasing role in spreading ideologies, the European Commission engaged with IT companies to develop measures to curb the online proliferation of violent content.

This initiative extended the EU’s existing focus on preventing the dissemination of hate speech through the media. Advocate General Yves Bot, in his opinion of May 5, 2011, on joined cases C-244/10 and C-245/10, emphasized that Member States must ensure that television broadcasts do not incite hatred based on race, sex, religion, or nationality. This includes broadcasts that could be seen as justifying groups designated as terrorist organizations by the EU, as such broadcasts could foster animosity between different communities.

Furthermore, in May 2016, the European Commission and four major internet companies (Twitter, Facebook, YouTube, and Microsoft) signed a voluntary Code of Conduct on countering illegal hate speech online. While not legally binding, the Code signaled these IT companies’ willingness to support the EU’s efforts, partly because of the protections offered by Articles 12 to 14 of the e-Commerce Directive (the safe harbor provisions), which shield service providers from liability for transmitted information, including hate speech, under certain conditions.

Regardless of their motivations, the IT companies’ cooperation with the EU is significant in the fight against online hate speech. The Code commits the companies to acting swiftly on valid notifications of illegal online hate speech, for instance by removing the content. It also emphasizes enforcing national laws transposing Council Framework Decision 2008/913/JHA to curb the spread of illegal hate speech online and offline. That decision requires Member States to penalize publicly condoning, denying, or grossly trivializing genocide, crimes against humanity, and war crimes when directed against a group defined by reference to race, color, religion, descent, or national or ethnic origin, where the conduct is carried out in a manner likely to incite violence or hatred.

This brings us to a central issue in current hate speech debates: how the term is defined and understood. Despite its frequent use, “hate speech” has no universally accepted definition, which poses a challenge. While many states have enacted laws against hate speech, their interpretations of what constitutes such speech vary.

International and national sources offer guidance. The Council of Europe defines hate speech as any expression that incites, promotes, or justifies hatred based on intolerance, including aggressive nationalism, ethnocentrism, and hostility towards minorities and migrants. Article 20, paragraph 2 of the International Covenant on Civil and Political Rights (ICCPR) prohibits any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence. The Human Rights Committee’s General Comment No. 34 clarifies this further, specifying that conduct prohibited under Article 20 must be prohibited by law and must both advocate national, racial, or religious hatred and constitute incitement to discrimination, hostility, or violence. On this reading, “advocacy” implies public expressions intended to provoke action, “hatred” denotes extreme negative emotions toward a group, and “incitement” signifies a likelihood of triggering imminent acts of discrimination, hostility, or violence.

This interpretation provides a comprehensive understanding of hate speech and its harmful potential. Considering the role of online media in disseminating political views and potentially fueling hatred, the EU’s collaboration with IT companies reflects a proactive and valid approach to addressing this modern manifestation of hate speech.

The Council of Europe: Prioritizing Freedom of Expression over Combatting Hate Speech?

The Council of Europe, another significant European body, appears more cautious in its approach. The Perinçek case exemplifies this. In October 2015, the Grand Chamber of the European Court of Human Rights (ECtHR) ruled that Switzerland’s criminal conviction of Doğu Perinçek for genocide denial violated Article 10 of the European Convention on Human Rights (ECHR), finding that the Swiss authorities’ restriction on his freedom of expression was disproportionate.

This highlights the diverging views within Europe on hate speech and its relationship with freedom of expression, a fundamental right in both the ECHR and the Charter of Fundamental Rights of the European Union.

The Council of Europe has addressed the prevention and prohibition of online hate speech since at least 2001, with the adoption of the Convention on Cybercrime and, in 2003, of an Additional Protocol criminalizing acts of a racist and xenophobic nature committed through computer systems. However, in June 2016, concurrently with the EU Code of Conduct, the Council of Europe Secretary General, concerned about internet censorship, stressed the need for transparent and proportionate rules on the blocking and removal of online content. This followed his report on democracy, human rights, and the rule of law, which identified shortcomings in some states.

The report stated that most member states have adequate legal frameworks for blocking, filtering, and removing online content, in line with Article 10 of the Convention. However, exceptions exist, particularly concerning laws on hate speech and counter-terrorism. This context sheds light on the ECtHR Grand Chamber’s decision in the Perinçek case, which deemed the application of the Swiss criminal provision disproportionate and unnecessary in a democratic society. Yet Article 261bis of the Swiss penal code clearly sets out penalties for public denigration of, or discrimination against, persons on the grounds of race, ethnic origin, or religion, including the denial or gross trivialization of genocide or crimes against humanity. The decision upholding Perinçek’s claim and finding a violation of Article 10 ECHR has significant consequences for the fight against hate speech at the EU level, as several judges highlighted in their dissenting opinion.

This suggests a step back by the Council of Europe in addressing hate speech, both online and offline, as national regulations of this kind are now being found inconsistent with the ECHR. This approach contrasts with the EU’s, creating a dilemma for EU Member States: criminalize online hate speech or prioritize Article 10 of the ECHR. A hypothetical scenario in which Switzerland were an EU member illustrates this. By criminalizing genocide denial as hate speech likely to incite violence or hatred, Switzerland would have complied with the Council Framework Decision, yet it was found by the ECtHR to have violated Article 10 of the ECHR.

Conclusion

Finding a balance between protecting freedom of expression and penalizing hate speech is challenging. Nonetheless, a more unified European approach is crucial. The EU and the Council of Europe must collaborate more effectively to address online and offline hate speech. The Council of Europe should work closely with the EU, particularly since the Secretary General, in his 2016 report, emphasized the importance of a commitment to the European Convention on Human Rights and the Strasbourg Court, advocating for the integration of fundamental freedoms into national legal and social frameworks.

To foster a consistent European approach, the European Court of Human Rights must be open to permitting greater restrictions on freedom of expression, as the dissenting opinion in the Perinçek case points out. Allowing freedom of expression to serve as a universal defense hinders the fight against online hate speech and the development of a unified European standard. Two steps are essential. First, establish a shared understanding of hate speech, drawing on the interpretation provided by General Comment No. 34, particularly its clarification of “incitement” and “hatred” and of the potential consequences of hate speech beyond physical violence. Second, agree on how such forms of speech threaten democratic values by violating the rights and reputations of others or by jeopardizing national security, public order, public health, or morals, and thus constitute legitimate grounds for restricting freedom of expression.

Licensed under CC BY-NC-SA 4.0