A French court has reportedly given Twitter two months to show how it tackles hate speech on its platform, after anti-discrimination groups launched legal action to pressure the site into doing more to remove problematic content.
According to a statement released by the UEJF French Jewish students’ association on Tuesday, the judge in summary proceedings has granted its request that the social media giant outline how it moderates hateful content.
UEJF, alongside SOS Racisme, Licra, J’accuse, SOS Homophobie and the Mrap, brought the case over concerns that Twitter was not doing enough to address hateful content on its platform. The groups described Twitter’s moderation system as having “terrible failings.”
Within two months, Twitter will have to show the court any documents related to its framework for tackling discrimination and hate speech, share details of its moderation team, including the number, location, nationality and language of its staff, and disclose the number of complaints about objectionable content received from French users of the platform, UEJF said.
“By its decision, the French justice shows that the GAFA cannot impose their own law,” UEJF said in a statement, in reference to Google, Amazon, Facebook, and Apple.
The group added that Twitter will no longer be able to let hatred spill over on its platform with impunity.
Twitter has not yet publicly responded to the statement from UEJF, nor has it addressed the court’s ruling.
A number of European states have sought to hold social media companies more accountable in recent years for hateful content on their platforms. Recently, the UK and the EU have each brought forward new laws that, if passed, would impose fines of up to 10% of a company’s turnover or £18 million ($25 million), whichever is greater, on platforms that fail to address hate speech. Under the European legislation, social media sites could even find themselves temporarily banned from the EU’s market if they are found to have committed “serious and repeated breaches of law.”
Last year, the European Commission said tech companies were getting better at tackling hate speech, stating that an investigation showed 90% of flagged content was examined within 24 hours – a significant increase from the 40% level in 2016.