On March 1, the European Commission -- the unelected executive branch of the European Union -- told social media companies to remove illegal online terrorist content within an hour, or risk facing EU-wide legislation on the topic. The ultimatum was part of a new set of recommendations that will apply to all forms of "illegal content" online, "from terrorist content, incitement to hatred and violence, child sexual abuse material, counterfeit products and copyright infringement."
The European Commission said, "Considering that terrorist content is most harmful in the first hours of its appearance online, all companies should remove such content within one hour from its referral as a general rule".
While the one-hour ultimatum is ostensibly only about terrorist content, this is how the European Commission motivated the new recommendations:
"The Juncker Commission made security a top priority from day one. It is the most basic and universal of rights to feel safe in your own home or when walking down the street. Europeans rightly expect their Union to provide that security for them – online and offline. The Commission has taken a number of actions to protect Europeans online – be it from terrorist content, illegal hate speech or fake news... we are continuously looking into ways we can improve our fight against illegal content online. Illegal content means any information which is not in compliance with Union law or the law of a Member State, such as content inciting people to terrorism, racist or xenophobic, illegal hate speech, child sexual exploitation... What is illegal offline is also illegal online".
"Illegal hate speech" is broadly defined by the European Commission as "incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin".
The internet companies have three months to deliver results and the European Commission will then decide whether it will introduce legislation. Incidentally, the three-month deadline, in May 2018, coincides with the deadline that the European Commission gave itself in 2017 on deciding whether the "Code of Conduct on countering illegal online hate speech" should be made into legislation.
In May 2016, the European Commission and Facebook, Twitter, YouTube, and Microsoft agreed on a "Code of Conduct on countering illegal online hate speech" (Google+ and Instagram joined the Code of Conduct in January 2018). The Code of Conduct commits the social media companies to review and remove, within 24 hours, content that is deemed to be "illegal hate speech". According to the Code of Conduct, when companies receive a request to remove content, they must "assess the request against their rules and community guidelines and, where applicable, national laws on combating racism and xenophobia..." In other words, the social media giants act as voluntary censors on behalf of the European Union.
The European Commission has been regularly monitoring the implementation of the Code of Conduct. It recently found that "Under the Code of Conduct on Countering Illegal Hate Speech Online, internet companies now remove on average 70% of illegal hate speech notified to them and in more than 80% of these cases, the removals took place within 24 hours".
The European Commission's announcement on the new recommendations, specifically the one-hour rule, was heavily criticized. EDiMA, an industry association that includes Facebook, YouTube, Google and Twitter, said it was "dismayed" by the Commission's announcement:
Our sector accepts the urgency but needs to balance the responsibility to protect users while upholding fundamental rights -- a one-hour turn-around time in such cases could harm the effectiveness of service providers' take-down systems rather than help... EDiMA fails to see how the arbitrary Recommendation published by the European Commission, without due consideration of the types of content; the context and impact of the obligation on other regulatory issues; and, the feasibility of applying such broad recommendations by different kinds of service providers can be seen as a positive step forward.
Joe McNamee, executive director of European Digital Rights, described the Commission's proposal as "voluntary censorship":
"Today's recommendation institutionalizes a role for Facebook and Google in regulating the free speech of Europeans," he said in a statement. "The Commission needs to be smart and to finally start developing policy based on reliable data and not public relations spin."
Facebook, on the other hand, said that it shares the European Commission's goal:
"We have already made good progress removing various forms of illegal content," the company said in a statement. "We continue to work hard to remove hate speech and terrorist content while making sure that Facebook remains a platform for all ideas."
There appears to be a huge disconnect here between the EU's professed concern for keeping Europeans safe -- as expressed in the one-hour rule -- and the EU's actual refusal to keep Europeans safe in the offline world. The result is that Europeans, manipulated by an opaque, unaccountable body, will not be kept safe either online or offline.
Only a few months ago, the EU's Commissioner for Migration, Home Affairs and Citizenship, Dimitris Avramopoulos, wrote, "We cannot and will never be able to stop migration... At the end of the day, we all need to be ready to accept migration, mobility and diversity as the new norm and tailor our policies accordingly".
The enormous influx of migrants into the EU, especially since 2015, is closely linked to the spike in terrorism, as well as the current and future Islamization of the continent. ISIS terrorists have returned to Europe or entered the continent disguised as migrants, and several have perpetrated terror attacks. According to Gilles de Kerchove, EU Counterterrorism Coordinator, there are now more than 50,000 jihadists living in Europe. In 2017, one terrorist attack was attempted every seven days in Europe, on average. When Jean-Claude Juncker, President of the European Commission, gave his State of the Union Address to the European Parliament in September 2017, he admitted a hugely embarrassing fact:
"We still lack the means to act quickly in case of cross-border terrorist threats. This is why I call for a European intelligence unit that ensures data concerning terrorists and foreign fighters are automatically shared among intelligence services and with the police".
After the ISIS attacks in Paris in November 2015, the Brussels attacks in March 2016, the Nice attack in July 2016, the Berlin Christmas Market attack in December 2016, and the Manchester attack in May 2017 -- and those are just the most spectacular ones -- should the "intelligence unit" for which Juncker calls not have been the very highest priority for the European Commission? After all, it claims that security is its "top priority". Yet, Europeans are supposed to believe that removing "terrorist content" within one hour is going to protect them against future terrorist attacks?
Moreover, if security really is a "top priority", and President Juncker so readily admits to lacking "the means to act quickly in case of cross-border terrorist threats", would the logical consequence not be to close those borders, at least until those means have been acquired?
European Commission President Jean-Claude Juncker. (Photo by Sean Gallup/Getty Images)
European intelligence authorities have repeatedly stated that with the ongoing migration, Europe is "... importing Islamic extremism, Arab anti-Semitism, national and ethnic conflicts of other peoples, as well as a different understanding of society and law".
These are all factors contributing to the current spikes not only in the terror threat to Europe, but also in the crime waves sweeping countries such as Sweden and Germany, including the surge in rapes.
Regardless of these facts, including that women can no longer exercise their freedom to walk in safety in many neighborhoods of European cities, the EU has staunchly refused to stop the influx of migrants. It is, therefore, difficult to take seriously in any way the European Commission's claim that the security, offline and online, of EU citizens is a "top priority". If that were true, why does Europe not simply close the borders? Stopping terrorists at the borders would be infinitely more efficient at reducing the terrorist threat than requiring tech companies to remove "illegal online content". Instead, the EU actually sues EU countries -- Poland, Hungary and the Czech Republic -- that refuse to endanger their citizens by admitting the quota of migrants that the EU assigns to them.
These EU ultimatums also fail to take into account what a recent study showed: that the second most important factor in the radicalization of Muslims, after Islam itself, is the environment, namely the mosques and imams to which Muslims go and on which they rely. Although the internet evidently does play a role in the radicalization process, the study showed that face-to-face encounters were more important, and that dawa, the proselytizing of Islam, plays a central role in this process. Perhaps the EU should obsess less over inconsequential time frames -- last year the European Commission talked about a two-hour time frame for removal -- and worry more about what is being preached inside the thousands of mosques scattered around its member countries, so many of them financed by Saudi Arabia and Qatar?
Recent experience with Germany's censorship law shows that a company is likely to err on the side of caution -- censorship. And what if the content in question, as has already occurred, may be trying to warn the public about terrorism?
Above all, the one-hour rule, with the threat of legislation behind it, looks more like a diversion created for public relations and for sneaking even more authoritarian censorship -- plus the ignorance that goes with it -- into the lives of EU citizens.
John Richardson is a researcher based in the United States.