"In dealing with disinformation, many different things need to come together"

A conversation with Dr. Julian Jaursch about the potential of platform regulation in Germany and the EU in addressing disinformation and fostering democratic principles in the digital public sphere.  

Disinfo Talks is an interview series with experts who tackle the challenge of disinformation through different prisms. Our talks showcase different perspectives on the various aspects of disinformation and approaches to countering it. In this installment, we talk with Dr. Julian Jaursch from the Berlin-based think-tank Stiftung Neue Verantwortung (SNV).

Tell us a bit about yourself: what do you do, and how did you become involved with disinformation?

I am a Project Director at SNV, a think tank in Berlin that works on a variety of tech policy topics. My work at SNV has focused on ways to address the challenge of online disinformation. It is a complex problem for platform operators, lawmakers, and citizens alike, and it is just the tip of the iceberg when it comes to the structural issues in how platforms function and are governed.

How serious is the threat of disinformation in general?

There is a clear fundamental challenge for all democracies as a result of the rapid spread of disinformation online: if a society lacks a basic shared view of reality, it becomes difficult to engage in constructive public debate and to come up with sound policies. Climate change is a good example of that; there is overwhelming scientific consensus that the climate crisis is man-made and that it poses a real threat to humanity. Nevertheless, if large segments of the population question the scientific evidence based on emotions and personal beliefs, without providing factual counter-evidence, this does not bode well for the creation of sound policies within a democratic process. Another point, which may seem to contradict the first one, is that a further threat to democracy lies in overplaying the gravity of disinformation. That is to say, the constant mention of disinformation makes some people fundamentally skeptical, driving distrust, apathy and a generally suspicious attitude towards democratic institutions such as the media. It is therefore necessary to walk a tightrope between over-emphasizing the threat of disinformation on the one hand, and dismissing it by saying "we have always had disinformation, it is not a big deal!" on the other. Finding this balance is crucial.

What is the situation like in Germany?

It is true that progress has been made over the past years when it comes to understanding and addressing disinformation in Germany. Yet, there could and should be a more holistic and strategic approach to this issue. At the moment, disinformation is addressed in scattered arenas, for example in laws, governmental action, and media initiatives, but there is no coordinated plan. To move forward, it is necessary to develop an overarching strategy that covers the different aspects of disinformation in a coordinated fashion, including regulation of platforms, support for independent journalism, and support for news literacy across all ages.

There are a couple of provisions in the German Criminal Code that address the issue of disinformation, for example in cases of defamation. However, as disinformation is hard to define legally, the legal reaction has so far been very mild and cautious, which is good. What policymakers have approached with too much caution, in my view, is holding accountable the people who disseminate disinformation, as well as the platforms that enable such practices. It is not just the individual piece of content that is a potential risk, but also the possibility that it will be amplified. This automated amplification is based on personal behavioral data that platforms presumably use to maintain their user base. Content that makes people scared or elicits other strong emotional reactions, such as hatred or outrage, spreads fast on platforms, and platform algorithms do not necessarily care whether disinformation is involved in eliciting these responses. Efforts to look into this phenomenon, to hold platforms accountable, and to create transparency about how these systems function have been too lax so far. The German government, but also EU policymakers, could do more in that regard. In fact, we are already seeing some regulatory efforts at the EU level, which seem promising, but time is of the essence.
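To make the amplification logic concrete, here is a minimal, purely illustrative sketch in Python of an engagement-based ranking score. Every name, weight, and number in it is invented for illustration; it does not represent any actual platform's system. It shows only the structural point made above: if a feed is sorted by predicted engagement alone, accuracy never enters the calculation, so an emotionally charged false post can outrank a calm, accurate one.

```python
# Purely illustrative toy model of engagement-based ranking.
# All names, weights, and numbers are invented; this is not any
# platform's actual algorithm.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    is_accurate: bool         # known to fact-checkers, but invisible to the ranker
    predicted_outrage: float  # modeled emotional reaction, 0..1
    predicted_clicks: float   # modeled engagement, 0..1

def engagement_score(post: Post) -> float:
    """Rank purely by predicted engagement; accuracy never enters the formula."""
    return 0.6 * post.predicted_clicks + 0.4 * post.predicted_outrage

feed = [
    Post("Calm, accurate report", True, predicted_outrage=0.1, predicted_clicks=0.3),
    Post("Outrageous false claim", False, predicted_outrage=0.9, predicted_clicks=0.8),
]

# The false but emotionally charged post ranks first, because the score
# optimizes for engagement, not accuracy.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  accurate={post.is_accurate}  {post.text}")
```

In this toy setup, transparency rules of the kind discussed above would amount to letting outside researchers inspect how such a score is constructed and what data feeds into it.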

Do you think the pandemic has shifted something in the way that policymakers are looking at disinformation, in terms of regulation and their willingness to go further than before?

It is my hope that the pandemic has made it clear that disinformation can actually harm people. This has been the case all along, but the pandemic brought it to the forefront. Another thing that happened with the pandemic and the resulting sense of urgency was that the platforms started to take action. For years, platforms had argued that there was not much they could do and that they did not see themselves as the “arbiters of truth”. But eventually they did take action, and this was because of the pandemic: they started to put warning labels on posts that contained disinformation and to add links to credible scientific sources. They worked with fact-checkers and removed harmful content, for example posts encouraging people to drink bleach to cure COVID. This showed that the platforms have ample room for action to mitigate the scope of the problem, room they had not used before.

How do you see things moving forward from here?

There are a couple of things happening at the EU level that are worth noting when it comes to platform regulation. One of them is the Digital Markets Act (DMA), which addresses competition, antitrust and monopoly power with regard to digital platforms. A second important piece of legislation is the Digital Services Act (DSA), which aims to create a safer digital space in which the fundamental rights of users are protected. The DSA draft is not perfect, but it presents a good first step. The fact that the European Commission came out with it places some of these issues on the political and public agenda, and that is a big step forward. I think that the DSA – if developed properly – is a very good way to create more transparency and to hold platforms accountable in a way that goes beyond the removal of content. Needless to say, the removal of illegal content is essential, but there are lots of other things one can do on top of it to hold platforms accountable. The EU has never done that before, and now it is acting on it, which shows progress.

There are multiple initiatives beyond the DSA package that attempt to address the issue of platform accountability. How should the policy response be orchestrated in your opinion?

From a very idealistic perspective, it should be a global effort, simply because the Internet is global and because platforms such as Facebook and TikTok operate globally. There should be basic human rights standards that apply across national borders. Realistically speaking, making progress at the EU level would be a good first step; however, even this is not a friction-free process. National legislation concerning the platforms already exists across the EU, especially in Germany, but also in Austria and France, and Poland is considering it as well. National governments are keen on keeping their powers over platforms within their own jurisdictions. In Germany, for example, policymakers are attempting to keep national laws in place and to integrate provisions into the DSA that create exemptions for such national law. In some ways this is understandable; however, it risks creating a fragmented and weak approach at the EU level. In my opinion, a consistent EU-wide approach would be the better option. The tension and interplay between the national and supranational levels will be one of the determining factors in the success of regulation attempts over the next couple of months and years: Is platform regulation in the European space going to be based primarily on national laws, or on European laws? As noted, I believe the better option would be regulation at the EU level. I have also made the argument for an EU agency that is independent both of governments and of companies. Its role would be to evaluate the suggested transparency and accountability rules, instead of relying on national bodies for this important task.

What are the biggest challenges right now, and where are the points of contention?

When it comes specifically to the Digital Services Act, there are lots of open questions regarding some important details. The rules on data access, audits and risk assessments, for instance, are crucial for better understanding the spread of disinformation and for holding platforms accountable. It is still open what these risk assessments should cover, which researchers can get access to platform data, and who can and should conduct independent audits of platforms. The matter of how platforms are designed, and how this design might foster or slow the spread of disinformation, also needs to be addressed. More generally, though, as I mentioned before, the best rules on paper will not be of any use if they are not enforced well. And there are some big divergences in what member states, parliamentarians and the European Commission think about the oversight and enforcement mechanism, so that will be interesting to follow as well.

Assuming that these challenges are overcome, what is the potential of regulation in addressing the challenge of disinformation? Even when the DSA is ironed out and passed, will it make a dent in this problem?

Not by itself, and that is an important point to make. You cannot tackle disinformation by focusing on just one measure (in this case, regulation). I do not think that any single law or regulatory framework could successfully address the challenge of disinformation on its own. In dealing with disinformation, many different things need to come together, like pieces of a puzzle. You need accountability rules for platforms, but you also need high-quality independent journalism and fact-checking to call out disinformation. It is a multifaceted problem in need of a multifaceted solution. Unfortunately, there is no silver bullet that would make the problem simply go away.


Dr. Julian Jaursch is a project director in the field of "Strengthening the Digital Public Sphere" at the Berlin-based think-tank Stiftung Neue Verantwortung (SNV). His work focuses on how to tackle disinformation and how to regulate large tech platforms. He analyzes approaches to platform regulation in the German, European and transatlantic contexts and develops policy recommendations that can be used by political and civil society decision-makers.

The opinions expressed in this text are solely those of the author(s) and/or interviewees and do not necessarily reflect the views of the Heinrich Böll Stiftung Tel Aviv and/or its partners.