30 Mar, 2018 14:41

Facebook teams up with ‘third-party fact checkers,’ and guess who they are

Facebook has announced that it will step up its efforts to filter “fake news” from its platform. The company will partner with “third-party fact checkers,” including AP and AFP, to verify news, photos, and videos.

The company’s fact-checking measures come as part of an overall strategy aimed at stopping the spread of content that could influence elections. Such content was spread by foreign actors to “divide Americans, and to spread fear, uncertainty and doubt” during the 2016 election, claimed Guy Rosen, Facebook’s VP of product management.

“We’re trying to develop a systematic and comprehensive approach to tackle these challenges, and then to map that approach to the needs of each country or election,” said Alex Stamos, Facebook’s chief security officer.

Whereas Facebook previously waited for user reports before reviewing content, it will now play the role of policeman, using machine-learning technology to proactively filter out content it deems harmful to election integrity. The approach was first rolled out last year during the Alabama special Senate race between Democrat Doug Jones and Republican Roy Moore, and Facebook now hopes to apply it in the run-up to the 2018 midterm elections.
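In practical terms, that is a shift from a purely report-driven review queue to one a model can populate on its own. The Python sketch below is purely illustrative: the classifier, its scoring, and the 0.8 threshold are assumptions made for the sake of the example, not anything Facebook has disclosed.

```python
# Illustrative only: contrasts a report-driven review queue with the
# proactive, model-driven one described above. The classifier, its score,
# and the 0.8 threshold are all assumptions, not Facebook internals.

REVIEW_THRESHOLD = 0.8  # assumed model-confidence cutoff for proactive review

def harm_score(post: str) -> float:
    """Stand-in for a trained classifier; a real system would use a learned model."""
    suspicious_terms = ("rigged", "hoax", "deep state")
    hits = sum(term in post.lower() for term in suspicious_terms)
    return min(1.0, hits / 2)

def should_review(post: str, user_reports: int) -> bool:
    reactive = user_reports > 0                       # old: wait for user reports
    proactive = harm_score(post) >= REVIEW_THRESHOLD  # new: model flags on its own
    return reactive or proactive

print(should_review("The election was rigged - a total hoax!", user_reports=0))  # True
print(should_review("Lovely weather for the rally today", user_reports=0))       # False
```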

In addition, the company announced yesterday that its army of fact checkers will be drawn from the Associated Press, AFP, and other organizations that follow Poynter’s fact-checking code of principles.

During the 2016 election campaign, these fact-checking mercenaries could only review articles that were flagged by Facebook users. As of yesterday, they have been given authority to seek out and flag content they deem “suspicious.”

“To reduce latency in advance of elections, we wanted to ensure we gave fact checkers that ability,” said Facebook’s News Feed Product Manager Tessa Lyons.

This content includes not just news articles and blog posts, but photos and videos as well. Photos and videos, Facebook argues, can influence discourse without ever being clicked, as seen recently when a doctored image of Parkland shooting survivor and activist Emma Gonzalez ripping up the US Constitution did the rounds on social media.

Once an article has been flagged as “false,” its distribution on the News Feed is dropped significantly. People who have shared the story are notified, as are people who attempt to share it from then on. People who still come across the story are shown verified articles alongside it, and the information gleaned from the whole process is used to train Facebook’s machine-learning model to better filter news in the future.
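Spelled out as pseudocode, that sequence is easier to follow. The sketch below only mirrors the steps described above; every name in it (Story, mark_false, the 20 percent demotion figure) is hypothetical, not Facebook’s actual system.

```python
# A minimal sketch of the pipeline the article describes for a story rated
# "false". Everything here is hypothetical; it mirrors the described steps,
# not Facebook's code.

from dataclasses import dataclass, field

DEMOTION_FACTOR = 0.2  # assumed: cut feed distribution to 20% of normal reach

@dataclass
class Story:
    url: str
    rating: str = "unrated"                 # "unrated" or "false"
    distribution_weight: float = 1.0        # relative News Feed reach
    sharers: list = field(default_factory=list)
    related_articles: list = field(default_factory=list)

def notify(user: str, story: Story) -> None:
    print(f"notify {user}: '{story.url}' was rated {story.rating}")

def mark_false(story: Story, verified: list, training_log: list) -> None:
    story.rating = "false"
    story.distribution_weight *= DEMOTION_FACTOR    # demote in the News Feed
    for user in story.sharers:                      # tell everyone who shared it
        notify(user, story)
    story.related_articles.extend(verified)         # show verified articles alongside
    training_log.append((story.url, "false"))       # feed the label back to the model

def share(user: str, story: Story) -> None:
    if story.rating == "false":                     # later sharers get warned too
        notify(user, story)
    story.sharers.append(user)

log = []
s = Story(url="example.com/doctored-photo", sharers=["alice"])
mark_false(s, verified=["ap.org/fact-check"], training_log=log)
share("bob", s)   # bob is warned before the share goes through
```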

If that all sounds a little scary, that’s because it is. Facebook’s clampdown on “fake news” is four-pronged. It targets actors that impersonate others; tricks used to artificially expand the audience for a particular message; the assertion of false information; and the spreading of false narratives.

The last target is the troubling one. Facebook describes “false narratives” as “intentionally divisive headlines and language that exploit disagreements and sow conflict.” Even Facebook’s own team seems unsure what that means.

“This is the most difficult area for us, as different news outlets and consumers can have completely different [views] on what an appropriate narrative is, even if they agree on the facts,” Stamos said.

Stamos himself is currently seeing out the last few months of his tenure as CSO. Last year, in a series of tweets, he spoke about the dangers of filtering news, warning that fake-news filtering would lead to Facebook “Becoming the Ministry of Truth with Machine Learning systems trained on your personal biases.”

Stamos ended his tweetstorm on an ominous note: “A lot of people aren’t thinking hard about the world they are asking Silicon Valley to build. When the gods wish to punish us they answer our prayers.”

For now, watch out. If an article is factually correct but contains the kind of truth Facebook doesn’t want you to see, don’t expect to find it in your news feed any time soon.
