How does the DSA contribute to platform governance and tackle disinformation?

Spotlight

The DSA is an example of a possible response to the challenges raised by content moderation and the spread of harmful content such as disinformation. A uniform set of horizontal procedural safeguards will increase legal protections in the internal market while encouraging platforms to be more transparent and accountable, without imposing requirements to generally monitor content.


Introduction

The adoption of the Digital Services Act (DSA) has been a welcome landmark step in European digital policy. This new legal instrument is part of a broader European strategy for “Shaping Europe’s digital future.” The proposals for the Digital Markets Act and the Artificial Intelligence Act can be considered two additional examples of a new phase of European digital policy that aims to address the challenges raised by the consolidation of powers in the algorithmic society.

This reactive and programmatic phase has not always characterized the European approach. Over the last twenty years, the policy of the European Union in the field of digital technologies has shifted from a liberal approach to a constitutional strategy aimed at protecting fundamental rights and democratic values, driven by a trend known as European digital constitutionalism. This change in approach did not occur by chance, but has been driven primarily by the transformation of the digital environment. Since the end of the last century, digital technologies have provided opportunities for the European internal market, encouraging regulators to promote the development of digital products and services. While this liberal scenario has promoted business models promising new opportunities for economic growth, it has also empowered the private sector to consolidate areas of power by learning how to design the technological standards and procedures that govern mass-scale activities, in some cases involving billions of users.

The DSA will also play a paradigmatic role in mitigating the challenges raised by the spread of harmful content, including online disinformation. More broadly, it is designed to address the process of content moderation, which is itself a paradigmatic example of platform governance. Online platforms set the standards and rules governing the flow of online content in their digital spaces. The organization and removal of content is enforced directly by social media companies relying on a mix of algorithmic technologies and human moderators. This private framework of governance also puts platforms in the position of balancing clashing individual rights and deciding which right should prevail in each specific case. Therefore, although at first glance social media fosters the sharing of opinions and ideas across social, political and national borders, platforms also resemble public agencies in their powers of content moderation.

Yet, as private actors, platforms are not required to respect fundamental rights or to consider the public interest. Even if multiple incentives drive content moderation, online platforms face the content moderation paradox. On the one hand, they aim to keep digital spaces free from objectionable content like disinformation and hate speech, a strategy meant to encourage users to spend more time online and feed the platform with data, which is the primary source for attracting advertising revenues. On the other hand, they prioritize viral content that creates more user engagement, such as hate speech and disinformation. This would explain why policing content has not curbed the online distribution of false information, which has been implicated in far-reaching political issues such as the 2016 US presidential election and the genocide in Myanmar. Most recently, the COVID-19 pandemic has shed light on the impact of disinformation, or the infodemic, and on the role of algorithmic technologies in moderating online content.

It is not by chance that countries around the world have adopted different approaches to counter disinformation, from setting up task forces or expert groups to criminalizing the spread of false content. In Europe, regulatory attempts to address the circulation of online content, and, more generally, the governance of online platforms, have increased legal fragmentation in the internal market. Although the Union has taken some steps to deal with issues of content moderation, for instance, by adopting the Copyright Directive and the AVMS Directive, the picture is still dominated by a fragmentation of guarantees and remedies at the supranational and national level, exemplified by the Network Enforcement Act in Germany and the French legislation on disinformation during election periods.

Rather than this piecemeal approach, the DSA promises to respond to the challenges raised by the dissemination of (potentially) harmful online content, such as disinformation and hate speech, by providing a supranational and horizontal regime that limits the discretion of online platforms in content moderation. Even if the DSA is just a proposal that will likely be subject to revisions, it promotes a new legal framework that constrains platform governance not by regulating content but by addressing the procedures of content moderation.

Harmonizing Procedures vs. Regulating Content

The DSA aims to modernize the rules governing online intermediaries while remaining rooted in the safe harbor of the old regime. It promises to maintain the regulatory framework envisaged by the e-Commerce Directive while introducing a new set of procedures aimed at increasing the level of accountability in content moderation. For instance, the DSA introduces due diligence and transparency requirements while providing redress mechanisms for users. In other words, without regulating content, it requires online platforms to comply with procedural safeguards, such as notice-and-takedown procedures for the removal of content (Article 14) and the obligation to provide a statement of reasons when removing content (Article 15).

However, the DSA does not apply uniformly to all intermediaries. Additional obligations apply only to those platforms qualifying as “very large online platforms,” a criterion based on a threshold of over 45 million service recipients. For such platforms, the proposal sets a higher standard of transparency and accountability for how they moderate content, advertising and algorithmic processes. They are required to assess the systemic risks stemming from their activities at least once a year (Article 26) and to put in place reasonable, proportionate and effective mitigation measures (Article 27). Likewise, pursuant to Article 29, very large online platforms will be required to set out in their terms and conditions, in a clear, accessible and easily comprehensible manner, the main parameters used by their recommender systems. These obligations are just some examples of the limits placed on platform discretion, pushing these actors to be more transparent and accountable in their process of content moderation.

The limitation of platform power also comes from a new system of independent audits and public enforcement that combines national and EU-level cooperation. Each Member State is required to appoint a Digital Services Coordinator, an independent authority responsible for supervising intermediary services and imposing fines. In addition, the DSA introduces sanctions of up to 6% of global turnover in the previous year for failure to comply with some of the safeguards and procedures in content moderation.

The DSA stipulations demonstrate that the European Commission is oriented towards a new legal framework for digital services, focusing on regulating the procedures of content moderation. This approach will strengthen the Digital Single Market, while limiting the discretion of platforms in governing online content. The question is whether this framework of safeguards will be effective in the fight against disinformation.

A New Legal Framework to Address Disinformation

Regulating false content online can be elusive, not only because of the challenges of defining disinformation, but also because it requires dealing with at least one regulatory dilemma: how, and to what extent, to regulate (false) speech. This is not a trivial issue for constitutional democracies, which aim to ensure that free speech remains one of the primary pillars of democratic society. Moreover, in the algorithmic society, regulating disinformation does not involve just the relationship between public actors and users but also platforms, whose freedom to conduct business could be undermined by obligations to monitor or remove content.

The European Union has been one of the leading actors guiding the policy debate in the field of disinformation. The report of the High-Level Expert Group on Disinformation set the tone by recommending a “multidimensional” approach. Building on this framework, the EU Action Plan against Disinformation confirms the multi-pronged strategy of the Union: protecting the public sphere from the rise of extremism, promoting free and fair elections, and countering disinformation. The plan outlines the key pillars for fighting disinformation while preserving democratic processes. Recognizing the importance of maintaining trust in institutions, it sets out to improve the capabilities of Union institutions to detect, analyze and expose disinformation; strengthen coordinated and joint responses to disinformation; mobilize the private sector to tackle disinformation; and raise awareness and improve societal resilience.

The approach of the Union reflects two distinct but complementary perspectives. The original and main approach has been to address the phenomenon through policy guidelines and self-regulatory solutions. The Union’s Code of Practice on Disinformation, which preceded the DSA, advanced a self-regulatory approach, pushing social media to voluntarily increase transparency and adopt other proactive measures against the spread of false content. Major platforms voluntarily committed to implementing a set of standards to tackle disinformation practices on their services. This approach tackled disinformation not by regulating speech but by targeting the dynamics affecting its circulation.

The second focus has been to increase the degree of transparency and accountability in content moderation, and the DSA clearly exemplifies this approach. Notably, the European Democracy Action Plan clarifies the role of the DSA in the fight against disinformation. The plan envisages overhauling the Code of Practice on Disinformation into a co-regulatory framework of obligations and accountability for online platforms, relying on the legal innovations stipulated by the DSA to increase transparency and accountability. In particular, the systemic risk assessment under Article 26 of the DSA could be further specified in relation to disinformation. Here, a DSA code of conduct could play an important role in tackling the amplification of false news through bots and fake accounts, and might be considered an appropriate risk-mitigating measure by very large online platforms, even though it has already sparked questions, for example those raised by the Sounding Board of the Multistakeholder Forum on Disinformation.

Nonetheless, codes of conduct are just a small part of the puzzle. Another important role of the DSA will be to increase transparency in the field of targeted advertising. The DSA recognizes that advertising systems used by very large online platforms pose particular risks relating, but not limited, to the spread of disinformation, with potentially far-reaching impacts in areas as diverse as public health, public security, civil discourse, political participation and equality. With this in mind, the DSA introduces an obligation for very large online platforms to provide public access to repositories of advertisements (Article 30). This new measure will allow more scrutiny and increase the accountability of these actors, while also providing new opportunities for research.

Yet another important part of the fight against disinformation is the role of trusted flaggers. The DSA requires that online platforms take the necessary technical and organizational measures to ensure that notices submitted by trusted flaggers are processed with priority and without delay (Article 19). This system opens the door for fact-checkers and other civil society organizations to become more involved in the process of content moderation and the reporting of online disinformation.

The DSA also deals with extraordinary circumstances affecting public security and public health. In such situations, the Commission has the power to rely on crisis protocols to coordinate a rapid, collective and cross-border response, especially when online platforms may be misused for the rapid spread of illegal content or disinformation, or where the need arises for rapid dissemination of reliable information (Article 37). Very large online platforms are required to apply these protocols, even though they are only temporary and do not impose a general and ongoing obligation to monitor online content.

Conclusions

The DSA is an example of a possible response to the challenges raised by content moderation and the spread of harmful content such as disinformation. A uniform set of horizontal procedural safeguards will increase legal protections in the internal market while encouraging platforms to be more transparent and accountable, without imposing requirements to generally monitor content.

The DSA does not just limit platform power; it also provides a horizontal framework for a series of other measures adopted in recent years, which operate instead as lex specialis. For instance, the obligations of video-sharing platforms with regard to audiovisual content established by the Copyright Directive or the AVMS Directive will continue to apply. Likewise, the DSA, when adopted, will not affect the application of the TERREG. Furthermore, the proposal does not affect the application of the GDPR and other Union rules on the protection of personal data and the confidentiality of communications.

Even though the DSA is still only a proposal, it has already set the tone for European regulation of platform governance. The specific obligations for very large online platforms will play a critical role in limiting platform governance in content moderation. While the Union is at the forefront of this new phase, the western side of the Atlantic has not shown the same concerns; rather, it has followed an opposite path. US policy is still anchored in a liberal approach, which strongly prioritizes the First Amendment over other considerations. For instance, the Communications Decency Act still immunizes online intermediaries, including modern online platforms, from liability for moderating online content.

In Europe, in contrast, the new approach aims to limit platform governance as a form of private power. According to Vestager, “[T]here’s no doubt […] that platforms—and the algorithms they use—can have an enormous impact on the way we see the world around us. And that’s a serious challenge for our democracy. [. . .] So we can’t just leave decisions which affect the future of our democracy to be made in the secrecy of a few corporate boardrooms.” This reactive European approach should not be discounted as a mere turn towards regulatory intervention but seen for what it is – a multidimensional framework to address the complex puzzle of platform governance in the algorithmic society.


The opinions expressed in this text are solely those of the author(s) and do not necessarily reflect the views of the Heinrich Böll Stiftung Tel Aviv and/or its partners.