By Paul De Hert* and Andrés Chomczyk Penedo**
* Professor at Vrije Universiteit Brussel (Belgium) and associate professor at Tilburg University (The Netherlands)
** PhD Researcher at the Law, Science, Technology and Society Research Group, Vrije Universiteit Brussel (Belgium). Marie Skłodowska-Curie fellow at the PROTECT ITN. The author has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497
1. Who is Responsible for Managing Online Misinformation?
The spread of false or misleading information online poses significant challenges in today’s digital world, a concern highlighted by Irene Khan, the UN Special Rapporteur on Freedom of Opinion and Expression. The COVID-19 pandemic brought this issue to the forefront, sparking debates about appropriate responses and revealing disparities in who has a voice in the online sphere. Online platforms and traditional media, often using automated tools, have become gatekeepers in these discussions.
The pandemic fueled polarization, particularly around vaccination strategies and related policies. Social media giants like Facebook, YouTube, and LinkedIn began removing, or restricting the reach of, content deemed to undermine official approaches to the pandemic. This illustrates how these platforms have become de facto regulators of online expression, determining what is acceptable within their digital spaces. It raises crucial questions about how content is categorized, particularly what constitutes “illegal” or “harmful” content, and who should be responsible for addressing it. While content moderation is not new, the rise of powerful online platforms necessitates updated regulatory frameworks, leading to discussions around the EU’s Digital Services Act (DSA).
2. Understanding the DSA
The DSA is a key component of the European Commission’s digital agenda for 2019-2024. Along with its counterpart, the Digital Markets Act, it aims to modernize the rules governing online platforms within the EU. Building on the existing e-Commerce Directive, the DSA covers a wide range of issues, including intermediary liability, online safety measures, transparency obligations, and risk management for large platforms. It also clarifies the roles of the European Commission and individual Member States in overseeing these digital spaces. While the DSA’s final text is still under negotiation, its adoption is expected in the near future.
3. Mis/Disinformation and the DSA: Grappling with “Illegal Content”
The DSA does not directly use the term “fake news.” It does mention “disinformation,” but this concept, along with “misinformation,” lacks a clear definition within the regulation. While the literature often uses these terms interchangeably or distinguishes them by the speaker’s intent, a universally agreed-upon legal definition remains elusive. This ambiguity is problematic for content moderation. The DSA defines “content moderation” as actions taken by intermediary service providers to identify and address content that violates either the law or their terms of service.
However, the DSA’s definition of “illegal content” relies heavily on existing national laws, which vary significantly across the EU: “‘illegal content’ means any information, which, in itself or by its reference to an activity, including the sale of products or provision of services, is not in compliance with Union law or the law of a Member State, irrespective of the precise subject matter or nature of that law.”
This reliance on national laws creates uncertainty, as what constitutes illegal content, including disinformation, differs across Member States. Furthermore, disinformation not explicitly addressed in national legislation falls into a gray area within the DSA’s framework.
4. The Unresolved Issue of “Harmful Content” in the DSA
Adding to the complexity is the concept of “harmful content,” which the DSA refrains from explicitly defining. The explanatory memorandum accompanying the DSA acknowledges that defining “harmful” content that may not be illegal is a sensitive issue with potential implications for freedom of expression. Discussions about harmful content often revolve around ethical, political, or religious considerations rather than strictly legal ones. This raises questions about the appropriateness of legal intervention in regulating content that, while potentially controversial or upsetting, remains legal.
This lack of clarity regarding “harmful content” leaves three distinct categories of content: (1) content that is clearly illegal, (2) content whose legality is uncertain and which may or may not be considered harmful, and (3) content that is legal but potentially harmful. Each of these categories calls for a different approach to content moderation.
5. The DSA’s Mechanisms for Addressing Illegal Content
The DSA establishes a comprehensive but intricate system for managing illegal content. As a general principle, intermediary service providers are not obligated to proactively monitor for illegal content. However, this exemption does not prevent them from taking down content voluntarily, and they are required to comply with removal orders from judicial or administrative authorities. In essence, public entities retain the authority to determine what content is deemed illegal and should be removed.
However, the DSA also introduces obligations for specific stakeholders that blur the lines of this state-controlled approach. For example, hosting providers must implement notice and takedown procedures, including expedited processes for trusted flaggers. Additionally, online platforms are required to establish internal complaint-handling systems and participate in out-of-court dispute settlements. These provisions grant a degree of law enforcement and judicial power to private entities, as they can remove content and resolve disputes without direct government oversight.
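To make the layering of these obligations more tangible, the sketch below models, in simplified Python, how a platform might triage incoming signals under such a regime. It is an illustration only: the names and priorities (NoticeSource, handle_notice, and so on) are our own assumptions and do not reproduce the DSA’s actual procedural requirements.

```python
from dataclasses import dataclass
from enum import Enum, auto


class NoticeSource(Enum):
    """Hypothetical origins of a signal about allegedly illegal content."""
    AUTHORITY_ORDER = auto()   # removal order from a judicial or administrative authority
    TRUSTED_FLAGGER = auto()   # notice submitted by a designated trusted flagger
    ORDINARY_USER = auto()     # notice submitted by any recipient of the service


@dataclass
class Notice:
    content_id: str
    source: NoticeSource
    reason: str


def handle_notice(notice: Notice) -> str:
    """Simplified, assumed triage loosely inspired by the DSA's layered obligations."""
    if notice.source is NoticeSource.AUTHORITY_ORDER:
        # Orders from public authorities must be complied with.
        return f"remove {notice.content_id} and report back to the issuing authority"
    if notice.source is NoticeSource.TRUSTED_FLAGGER:
        # Trusted-flagger notices are handled with priority.
        return f"fast-track review of {notice.content_id}"
    # Ordinary notices enter the standard notice-and-action queue; the uploader can
    # later contest the outcome via the internal complaint-handling system or an
    # out-of-court dispute settlement body.
    return f"queue {notice.content_id} for standard review"


if __name__ == "__main__":
    print(handle_notice(Notice("post-123", NoticeSource.TRUSTED_FLAGGER, "alleged illegal content")))
```

Even in this toy version, the key point of the section is visible: most of the routing decisions sit with the platform, with public authorities appearing only as one input among several.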
6. DSA and the Moderation of Legal but Harmful Content
While the DSA avoids defining “harmful content,” it effectively delegates the responsibility for defining and regulating this category to online platforms through their terms of service. This empowers private companies to restrict speech within their digital boundaries, raising concerns about censorship and freedom of expression. Article 12 of the DSA states:
“Providers of intermediary services shall include information on any restrictions that they impose concerning the use of their service in respect of information provided by the recipients of the service, in their terms and conditions. That information shall include information on any policies, procedures, measures, and tools used for content moderation, including algorithmic decision-making and human review. It shall be set out in clear and unambiguous language and shall be publicly available in an easily accessible format.”
This effectively allows platforms to set the rules of engagement within their spaces, acting as lawmakers, enforcers, and adjudicators. They define acceptable content through their terms and conditions, enforce these rules through content moderation, and act as arbiters in disputes arising from content removal. This raises concerns about accountability, transparency, and potential biases in these platforms’ decision-making processes.
7. Privatizing Content Moderation: A Second “Invisible Handshake”
The DSA’s approach to content moderation reflects a broader trend of delegating government-like powers to private entities, particularly in the technology sector. Similar to how financial institutions have become responsible for combating money laundering, online platforms are increasingly tasked with policing their spaces for illegal or harmful content. This shift has been referred to as a new “invisible handshake” between states and Big Tech, akin to the collaboration seen after the 9/11 attacks in the name of national security.
Content moderation presents a significant challenge for governments given the sheer volume of content generated online. Platforms like Facebook have invested heavily in content moderation infrastructure, employing vast teams of human moderators and relying on automated tools. However, outsourcing this responsibility to platforms raises concerns about transparency, accountability, and potential overreach by private companies in shaping online discourse.
8. Implications of the Second Invisible Handshake
The DSA’s reliance on platforms for content moderation has significant implications. It effectively shifts power away from users and towards platforms, which become arbiters of acceptable speech. Platforms, driven by commercial interests and potentially influenced by political pressures, could silence dissenting voices or limit access to information. This raises concerns about censorship, particularly regarding content deemed “harmful” based on subjective criteria.
Furthermore, this approach allows governments to sidestep the complexities and potential political costs associated with making difficult decisions about content regulation. By delegating responsibility to platforms, governments can avoid public scrutiny and accountability for decisions made about online speech. This lack of transparency and democratic oversight over content moderation is a significant concern.
9. A More Democratic Approach to Content Moderation
To address these concerns, alternative approaches to content moderation are needed. One solution is to distinguish between “manifestly illegal content,” which is clearly unlawful and can be removed directly by platforms, and “merely illegal content,” which requires legal interpretation and should be referred to judicial or administrative authorities. This would allow for more efficient content moderation while ensuring appropriate legal oversight.
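The routing logic behind this proposal can be summarised in a few lines. The sketch below is purely illustrative; the labels and the route function are our own hypothetical shorthand, not drawn from the DSA text.

```python
from enum import Enum, auto


class Assessment(Enum):
    MANIFESTLY_ILLEGAL = auto()  # unlawfulness is evident without legal interpretation
    MERELY_ILLEGAL = auto()      # unlawfulness turns on contested legal interpretation
    LEGAL = auto()


def route(content_id: str, assessment: Assessment) -> str:
    """Assumed routing rule for the distinction proposed above."""
    if assessment is Assessment.MANIFESTLY_ILLEGAL:
        return f"platform removes {content_id} directly"
    if assessment is Assessment.MERELY_ILLEGAL:
        return f"refer {content_id} to a judicial or administrative authority"
    return f"{content_id} stays online (terms-of-service rules may still apply)"


print(route("post-456", Assessment.MERELY_ILLEGAL))
```

The design choice this captures is the allocation of interpretive work: platforms act only where no legal judgment is needed, while contested cases return to public bodies.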
Another approach is to empower users and communities in the content moderation process. Community-based content moderation, which has been implemented successfully on some platforms, allows users to participate in setting and enforcing community standards for acceptable content. This fosters a sense of ownership and responsibility among users while promoting transparency and accountability.
While not without its challenges, community-based content moderation offers a more democratic and participatory alternative to the current platform-centric model. It recognizes that users are stakeholders in online spaces and should have a voice in shaping the rules of engagement. Furthermore, it encourages dialogue and deliberation within communities about acceptable content, fostering a more nuanced and context-specific approach to content moderation.
10. Moving Forward: Transparency, Accountability, and User Empowerment
Addressing the challenges of online content moderation requires a multi-faceted approach that balances freedom of expression with the need to mitigate harm. The DSA, while well-intentioned, raises concerns about transparency, accountability, and the potential for censorship. Moving forward, it is crucial to:
- Clearly define “illegal” and “harmful” content: Vague definitions give platforms too much discretion and create uncertainty for users.
- Strengthen judicial oversight: Establish efficient mechanisms for legal review of content moderation decisions, particularly regarding “merely illegal content.”
- Promote community-based moderation: Empower users to participate in setting and enforcing community standards for acceptable content.
- Ensure transparency and accountability: Platforms should be required to publish regular transparency reports detailing their content moderation practices and the impact of their policies.
- Invest in media literacy: Educate users about online disinformation and equip them with the critical thinking skills to navigate the digital world responsibly.
The digital landscape is constantly evolving, and content moderation will remain a complex and challenging issue. By prioritizing transparency, accountability, and user empowerment, we can create a more democratic and inclusive online environment that fosters freedom of expression while mitigating harm.