Seeking to limit “viral hate speech,” EU and social sites announce Code of Conduct

The European Commission, Facebook, Google (YouTube), Twitter and Microsoft have announced a “Code of Conduct” governing “illegal hate speech” online in the EU. Earlier today, the various involved entities issued a statement explaining the new framework:

In order to prevent the spread of illegal hate speech, it is essential to ensure that relevant national laws transposing the Council Framework Decision on combating racism and xenophobia are fully enforced by Member States in the online as well as the offline environment. While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously reviewed by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame. To be considered valid in this respect, a notification should not be insufficiently precise or inadequately substantiated.

In principle, this framework is not unlike a copyright notice-and-takedown procedure. Upon receiving a valid notification of prohibited hate speech, the companies named above will review it and either remove the targeted content or disable access to it within 24 hours.
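
To make the comparison concrete, here is a minimal, purely illustrative sketch in Python of how such a notice-and-takedown flow might be modeled. The Notification and Action names, the validity flags and the handle_notification helper are assumptions made for illustration only; they are not part of the Code of Conduct or of any company's actual moderation system.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from enum import Enum


    class Action(Enum):
        # The two outcomes described in the Code of Conduct for content
        # found to be illegal hate speech, plus a rejection path.
        REMOVE = "remove"
        DISABLE_ACCESS = "disable_access"
        REJECT = "reject"  # notification invalid or content judged lawful


    @dataclass
    class Notification:
        content_id: str
        reason: str              # why the notifier considers the content illegal hate speech
        received_at: datetime


    def review_deadline(n: Notification) -> datetime:
        # The Code of Conduct asks platforms to review valid notifications
        # within 24 hours of receipt.
        return n.received_at + timedelta(hours=24)


    def handle_notification(n: Notification,
                            is_valid: bool,
                            is_illegal_hate_speech: bool,
                            disable_only: bool = False) -> Action:
        # Hypothetical decision flow: a vague or unsubstantiated notification
        # does not count as "valid"; otherwise the platform removes the
        # content or disables access to it.
        if not is_valid:
            return Action.REJECT
        if is_illegal_hate_speech:
            return Action.DISABLE_ACCESS if disable_only else Action.REMOVE
        return Action.REJECT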

Beyond removal, the companies have also pledged a series of commitments to educate users and to counter “hateful rhetoric and prejudice.”

Illegal hate speech is defined under European law as follows:

  • public incitement to violence or hatred directed against a group of persons or a member of such a group defined on the basis of race, colour, descent, religion or belief, or national or ethnic origin . . .
  • publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes as defined in the Statute of the International Criminal Court (Articles 6, 7 and 8) and crimes defined in Article 6 of the Charter of the International Military Tribunal, when the conduct is carried out in a manner likely to incite violence or hatred against such a group or a member of such a group.

The historical backdrop of the law is the Holocaust. But more recent events, most notably a wave of terrorist attacks across Europe, also feed into the concerns expressed in the Code of Conduct.

A consortium of European civil and human rights groups criticized the process that generated the Code of Conduct, as well as its enforcement mechanisms and substance, calling it a threat to freedom of expression in Europe:

In short, the “code of conduct” downgrades the law to a second-class status, behind the “leading role” of private companies that are being asked to arbitrarily implement their terms of service. This process, established outside an accountable democratic framework, exploits unclear liability rules for companies. It also creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism.


About the author

Greg Sterling
Contributor
Greg Sterling is a Contributing Editor to Search Engine Land, a member of the programming team for SMX events and the VP, Market Insights at Uberall.
