On 15 December 2020, the European Commission proposed the Digital Services Act (DSA) together with the Digital Markets Act (DMA). Both aim to establish provisions that ensure a safe online environment and a fair and open market across the EU Member States. On 22 April 2022, the 27 EU Member States agreed on a common position approving the broad lines of the Digital Services Act.
The DSA aims to complement the E-Commerce Directive 2000/31/EC. The E-Commerce Directive was adopted in 2000 with the objective of providing a legal framework for the internet. At that time, the digital world was still taking shape and most of the online platforms we know today did not yet exist: in 2000, Mark Zuckerberg was a teenager and Google was still in its garage. Year after year, new online platforms appeared and revolutionised the internet. However, these platforms created legal situations that were not foreseeable, and digital change produced undesirable effects that the E-Commerce Directive was not always able to tackle. In the case of disinformation, the Directive was unable to provide meaningful measures to curb its spread, mainly because it focused on illegal content, whereas disinformation is not always illegal. For all these reasons, the DSA was born.
As far as disinformation is concerned, the DSA offers hope of tackling this phenomenon. Disinformation has become a major problem on the internet in recent years. Whether used to manipulate votes or to make money through advertising, tackling it has become a necessity.
The Digital Services Act is the first time that European legislators address online disinformation directly through a regulation.
Although the DSA is a lengthy piece of legislation of 113 pages, the Regulation mentions the term “disinformation” only four times, which may raise doubts as to whether it adequately addresses the issue.
But when analysing some passages of the Regulation in depth, we observe that disinformation is addressed in Chapter III, Section 4, under the notion of “systemic risk”. Among the systemic risks the DSA targets is the intentional manipulation of a platform’s service, which covers disinformation.
Regarding systemic risks, very large online platforms such as Facebook or Twitter are required to take proportionate, reasonable and effective measures to curb their spread. In other words, they have to put in place content moderation against manipulated content. Content moderation can take various forms, such as downgrading false information in search results and news feeds, or limiting targeted advertising that is likely to contribute to the growth of the risk. To best protect freedom of expression, the DSA provides for due process in content moderation, which is noteworthy as it reduces unfair decisions. Due process here means that users can contest a decision made by a platform. If users consider that their rights have still not been respected, they can turn to the Digital Services Coordinator to enforce them. The Digital Services Coordinator is an independent authority to be created under the DSA; its role is to monitor platforms’ compliance with the Regulation and to impose sanctions in case of non-compliance.
Another interesting measure introduced by the DSA can be found in Article 19, which suggests close collaboration between online platforms and “trusted flaggers”. The DSA defines a “trusted flagger” as an independent entity that represents collective interests and has particular expertise in detecting illegal online content. The European legislators want platforms and trusted flaggers to work together to spot illegal disinformation.
Finally, Article 37 of the DSA establishes crisis protocols that may be initiated at the instigation of the European Commission in the event of extraordinary circumstances directly affecting public health or public security. With the recent political and health events in Europe, European legislators have become aware of the important and sometimes destabilising role that online platforms can play in the dissemination of information during a crisis. These exceptional measures may include, for example, promoting authoritative information from the Member States and the European Union to dilute disinformation.
Although pioneering in the fight against disinformation, the Digital Services Act has its shortcomings.
As we have just seen, the DSA proposes interesting measures to tackle disinformation. However, most of them are mandatory only for online platforms with more than 45 million users in the EU. No solutions are proposed for small and medium-sized online platforms. In doing so, European legislators have regulated only one side of the problem, and we may therefore fear that the dissemination of disinformation will shift to those smaller platforms.
Another issue is that the DSA only suggests, and does not require, systematic collaboration with trusted flaggers, which may weaken the effect of this measure. With regard to content moderation, the DSA also fails to address the use of algorithms to detect and remove disinformation. This is a concern, as automated moderation may increase the error rate and impinge on freedom of expression.
Some of the measures proposed by the European legislator also lack precision. The DSA states that, to tackle disinformation, very large online platforms should adopt “reasonable, proportionate and effective” measures, but it does not say how to meet this expectation. Finding fair and effective measures is in fact one of the most difficult tasks to achieve.
Another criticism of the DSA is that it places too little emphasis on user empowerment. The only strategy adopted to empower users against disinformation is the transparency required of online platforms. This is a good step, but it is not enough. It is also surprising because, for illegal content, the legislators proposed interesting tools to empower users against its dissemination, such as the opportunity to flag it. Perhaps this could have been applied to disinformation too.
Finally, access to transparency could have been made easier. The transparency the DSA requires is not easily accessible to users: to find the relevant information, they must dig through either the reports published by the platforms or their terms and conditions. For a person not already inclined to understand how disinformation works, nothing encourages them to do so. Can we really expect a teenager using TikTok to read the terms and conditions or, worse, to go through an application programming interface to understand how they could be manipulated?
Finding the right measures to fight disinformation is hard and full of pitfalls. We can welcome the fact that European legislators have taken the step of creating a regulation adapted to our online world, but there are still improvements to be made. Whether the European Commission and the Member States will overcome these shortcomings remains to be seen; only time will tell.
Cailin Van der zijden.
Sources:
https://ec.europa.eu/commission/presscorner/detail/en/IP_22_2545
https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020PC0825&from=en