US tech giants have signed an agreement with the European Commission to tackle the spread of illegal hate speech online, under which they are required to address complaints within 24 hours.
Facebook, Twitter, Microsoft, and YouTube adopted the code of conduct on Tuesday, committing to crack down swiftly on online hate speech by putting in place internal procedures to respond to the majority of notifications of abuse within 24 hours and remove the offending content if necessary.
The guidelines are aimed at combatting the use of social media by terrorists, according to a joint statement issued by the EC and the IT companies.
“The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racists use to spread violence and hatred,” said Věra Jourová, EU Commissioner for Justice, Consumers and Gender Equality.
The IT companies also agreed to educate their users about the types of content banned under their rules and to encourage them to flag material violating the “code of conduct.”
The measures are based on the Framework Decision on Combating Racism and Xenophobia, which criminalizes public incitement to violence or hatred and forms the legal basis for defining online content as illegal.
The European Court of Human Rights distinguishes between content that “offends, shocks, or disturbs the State or any sector of the population” and content genuinely intended to incite people to violence or hatred.
All of the companies said “hate speech” has no place on their platforms and promised to continue to take measures to combat it.
Twitter’s Head of Public Policy for Europe, Karen White, said the microblogging site remains “committed to letting the Tweets flow”.
“There is a clear distinction between freedom of expression and conduct that incites violence and hate.
“In tandem with actioning hateful conduct that breaches Twitter’s Rules, we also leverage the platform’s incredible capabilities to empower positive voices, to challenge prejudice and to tackle the deeper root causes of intolerance,” she said.
Microsoft noted that it had recently announced additional steps to specifically prohibit the posting of terrorist content.
Twitter, Facebook, and Google were sued by a French Jewish youth group this month after only a small number of the hate speech posts the group had flagged were deleted, and only after lengthy delays, averaging 15 days in the case of YouTube and Twitter.
However, two civil society groups involved in the discussions announced on Tuesday that they would not take part in future talks after they were excluded from final negotiations.
European Digital Rights (EDRi) and Access Now say they have no confidence in the “ill considered” code of conduct, citing a lack of transparency and public input during the drafting of the document.
The groups claim that the “‘code of conduct’ downgrades the law to a second-class status, behind the ‘leading role’ of private companies that are being asked to arbitrarily implement their terms of service.”
They have also pointed out that the agreement could breach the EU Charter of Fundamental Rights.
In addition, concerns are being raised that the agreement could damage freedom of expression by allowing private companies to define what constitutes hate speech and what does not.