Perspective or censorship? Google shares AI designed to fight online trolling
Google has released to developers the code for Perspective, a new machine-learning tool designed to flag “toxic comments” online. Its creators hope the AI will clean up internet debate, but critics fear it will lead to censorship instead.
Perspective was created by Jigsaw and Google’s Counter Abuse Technology team – both subsidiaries of Google’s parent company Alphabet – in a collaborative research project called Conversation AI. Its mission is to build technology to deal with problems ranging from “online censorship to countering violent extremism to protecting people from online harassment.”
Jigsaw has partnered with online communities and publishers – including the New York Times, Wikipedia, the Guardian and the Economist – to measure the “toxicity” of comments.
“This gives them (news sites and social media) a new option: Take a bunch of collective intelligence – that will keep getting better over time – about what toxic comments people have said would make them leave, and use that information to help your community discussions,” said CJ Adams, product manager of Google’s Conversation AI, according to WIRED.
Until now, for news sites and social media trying to rein in comments, “the options have been upvotes, downvotes, turning off comments altogether or manually moderating,” Adams said.
Twitter and Facebook have also recently announced anti-trolling moves of their own.
On a demonstration website launched Thursday, anyone can type a phrase into Perspective’s interface and instantly see how it rates on the “toxicity” scale.
RT America tested the AI with some comments from our own website. Type “he is a Communist with a Jew nose” into its text field, and Perspective will tell you it has a 77 percent similarity to phrases people consider toxic. Write “I piss on Confederate graves; I wholly agree with your views of these fellows” and Perspective will flag it as 42 percent toxic, while “Please RT no more Libtards” gets a 33 percent rating.
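Under the hood, the demo calls the same scoring service Google opened to developers. Here is a minimal sketch of querying it from Python, assuming the v1alpha1 Comment Analyzer REST endpoint Google documented at launch; YOUR_API_KEY is a placeholder, and requests is a third-party HTTP library:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: a Google API key with Comment Analyzer access
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text):
    """Ask Perspective how likely `text` is to be perceived as toxic (0.0-1.0)."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    # The summary score is Perspective's overall toxicity probability.
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Roughly the 33 percent figure from the test above, give or take model updates.
print(toxicity_score("Please RT no more Libtards"))
```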
Google introduces Perspective, a machine learning initiative to help police comments https://t.co/lQuia89Hb8 pic.twitter.com/mn2m9RhYlL
— Android Police (@AndroidPolice) February 23, 2017
Jigsaw developed the “troll detector” by taking millions of comments from Wikipedia editorial discussions, the New York Times and other unnamed partners. The comments were shown to panels of ten people recruited online, who stated whether they found them toxic. The resulting judgements provided a large data set of training examples to teach the AI.
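What Jigsaw describes is standard supervised text classification: human raters supply the labels, and a model learns to predict them for new comments. A toy sketch of the idea in Python with scikit-learn follows; the four example comments, the TF-IDF features and the logistic-regression classifier are illustrative assumptions, not Jigsaw’s actual data or architecture:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-ins for the millions of human-rated comments Jigsaw collected.
comments = [
    "Thanks for the thoughtful reply, I learned something.",
    "You are an idiot and everyone hates you.",
    "Interesting point, though I read the source differently.",
    "Get lost, nobody wants trash like you around here.",
]
labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = rated toxic by human judges

# Turn text into features and fit a classifier that outputs a probability,
# analogous to Perspective's percentage "toxicity" score.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

print(model.predict_proba(["nobody wants your trash opinions"])[0][1])
```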
“Ultimately we want the AI to surface the toxic stuff to us faster,” Denise Law, the Economist’s community editor, told WIRED. “If we can remove that, what we’d have left is all the really nice comments. We’d create a safe space where everyone can have intelligent debates.”
Jared Cohen, Jigsaw’s founder and president, said the tool is just one step toward better conversations, and he hopes versions will be developed in other languages to counter state-sponsored use of abusive trolling as a censorship tactic.
"Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments," Cohen wrote in a blog post.
Not everyone thinks Perspective is wonderful, however. Libertarian journalist Virgil Vaduva ran his own experiment on Perspective, and concluded that the AI “can easily be used to censor controversial speech, whether that speech comes from the left or the right of the American political spectrum.”
Applying the AI to censor comments “will create an environment empty of value… where everyone agrees with everyone, or so it may appear,” Vaduva wrote.