The EU has launched a comprehensive Action Plan against Disinformation. Its purpose, according to a recent press release from the European Commission, is apparently to "protect its democratic systems and public debates and in view of the 2019 European elections as well as a number of national and local elections that will be held in Member States by 2020".
In June 2018, leaders of EU member states met in the European Council and invited the European Commission "to present... an action plan by December 2018 with specific proposals for a coordinated EU response to the challenge of disinformation..." It is this action plan that the Commission presented to the public on December 5.
The Action Plan focuses on four areas:
Improved detection of disinformation (the European Commission dedicated 5 million euros to this effort and seemingly expects Member States to contribute at the national level as well).
Coordinated Response -- the EU institutions and Member States will set up a Rapid Alert System "to facilitate the sharing of data and assessments of disinformation campaigns". The Rapid Alert System will be set up by March 2019 and "will be complemented by further strengthening relevant resources".
Online platforms and industry are called on to ensure "transparency of political advertising, stepping up efforts to close active fake accounts, labelling non-human interactions (messages spread automatically by 'bots') and cooperating with fact-checkers and academic researchers to detect disinformation campaigns and make fact-checked content more visible and widespread" in accordance with the previously signed Code of Practice on Disinformation.
Raising awareness and empowering citizens: In addition to "targeted awareness campaigns", the "EU institutions and Member States will promote media literacy through dedicated programmes. Support will be provided to national multidisciplinary teams of independent fact-checkers and researchers to detect and expose disinformation campaigns across social networks". In 2018, citizens are suddenly no longer "media literate" and need to be "empowered" in order to be told how and what to think.
Crucially, and as mentioned above, the Action Plan relies on the previously introduced Code of Practice on Disinformation, which the online tech giants -- Facebook, Google, Twitter and Mozilla -- signed in October 2018. The Code of Practice is necessary because, according to EU Commissioner for the Security Union Sir Julian King:
"The weaponisation of on-line fake news and disinformation poses a serious security threat to our societies. The subversion of trusted channels to peddle pernicious and divisive content requires a clear-eyed response based on increased transparency, traceability and accountability. Internet platforms have a vital role to play in countering the abuse of their infrastructure by hostile actors and in keeping their users, and society, safe."
In September, Commissioner for Digital Economy and Society Mariya Gabriel said about the Code of Practice:
"This is the first time that the industry has agreed on a set of self-regulatory standards to fight disinformation worldwide, on a voluntary basis. The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and demonetisation of purveyors of disinformation, and we welcome this. These actions should contribute to a fast and measurable reduction of online disinformation. To this end, the Commission will pay particular attention to its effective implementation.
"The Code of Practice should contribute to a transparent, fair and trustworthy online campaign ahead of the European elections in spring 2019, while fully respecting Europe's fundamental principles of freedom of expression, a free press and pluralism."
According to Andrus Ansip, Vice-President responsible for the Digital Single Market, the Code of Practice and the Action Plan against Disinformation are meant "to protect our democracies against disinformation. We have seen attempts to interfere in elections and referenda, with evidence pointing to Russia as a primary source of these campaigns."
EU foreign policy chief Federica Mogherini stated: "It's our duty to protect this space and not allow anybody to spread disinformation that fuels hatred, division, and mistrust in democracy."
It sounds noble: The EU wants to protect citizens from "fake news" and from the interference in national and European democratic processes by foreign powers such as Russia.
The problem is that this professedly noble initiative comes from an organization that has already been censoring speech in Europe for several years, thereby making it difficult to take these stated intentions at face value. This is, after all, the European Commission that in May 2016 agreed with Facebook, Twitter, YouTube, and Microsoft on a "Code of Conduct on countering illegal hate speech online" (Google+ and Instagram also joined the Code of Conduct in January 2018).
The Code of Conduct commits the social media companies to review and remove, within 24 hours, "illegal hate speech". According to the Code of Conduct, when companies receive a request to remove content, they must "assess the request against their rules and community guidelines and, where applicable, national laws on combating racism and xenophobia..." In other words, the social media giants act as voluntary censors on behalf of the European Union.
In addition to the Code of Conduct, the EU hosts several initiatives aimed at increasing censorship. Recently, for example, the EU issued a call for research proposals on how "to monitor, prevent and counter hate speech online". It also sponsors projects that "guide" journalists on what to write: Under the EU's Rights, Equality and Citizenship Programme (REC), the EU has financed the publication of a handbook with guidelines for journalists on how to write about migrants and migration. The guidelines form part of the RESPECT WORDS project -- also financed by the EU -- which "aims to promote quality reporting on migrants and ethnic and religious minorities as an indispensable tool in the fight against hate". The handbook guidelines state, among other things, that journalists should:
"Take care not to further stigmatise terms such as 'Muslim' or 'Islam' by associating them with particular acts... Don't allow extremists' claims about acting 'in the name of Islam' to stand unchallenged. Highlight... the diversity of Muslim communities... where it is necessary and newsworthy to report hateful comments against Muslims, mediate the information. Challenge any false premises on which such comments rely".
In other words, the guidelines ask journalists to disinform the public. How, then, should one logically respond to an entire EU-sponsored "Action Plan against Disinformation"?
Finally, this is the same European Commission that most recently expressed its disapproval of Austria's withdrawal from the UN's "Global Compact for Safe, Orderly and Regular Migration." The Compact stipulates that media outlets that do not support the UN's migration agenda will not be eligible for public funding. How is that for "fully respecting Europe's fundamental principles of freedom of expression, a free press and pluralism"?
What Europe should expect, as this new Action Plan against Disinformation is rolled out, is, in fact -- more censorship.
John Richardson is a researcher based in the United States.