18 Mar, 2017 13:09

Google pledges ad policy change after backlash over adverts shown near extremist content

Google is promising to change its advertisement placement policies after several high-profile customers, including London authorities, discovered that their ads had appeared alongside extremist content and pulled them.

“We’ve heard from our advertisers and agencies loud and clear that we can provide simpler, more robust ways to stop their ads from showing against controversial content,” Google’s UK managing director, Ronan Harris, wrote on a corporate blog on Friday.

“While we have a wide variety of tools to give advertisers and agencies control over where their ads appear… we can do a better job of addressing the small number of inappropriately monetized videos and content. We’ve begun a thorough review of our ads policies and brand controls, and we will be making changes in the coming weeks to give brands more control over where their ads appear across YouTube and the Google Display Network,” he promised.

The pledge comes after several prominent advertisers announced that they would suspend online advertising with Google because of the risk that their ads could appear alongside inappropriate content.

The biggest is France’s Havas, the world’s sixth-largest marketing services group, which said Google had been “unable to provide specific reassurances, policy and guarantees that their video or display content is classified either quickly enough or with the correct filters.” The BBC, the Guardian, and Transport for London took similar steps.

The pullout comes after an exposé in the Times revealed that Google had displayed the ads on some extremist sites, as well as alongside extremist videos posted on YouTube, the tech giant’s video service. The British government has summoned Google executives to explain why taxpayer money ended up funding publishers of inappropriate content via the ads.

In his blog post, Harris argued that Google’s automated algorithms fail to prevent such incidents only in rare cases.


“With millions of sites in our network and 400 hours of video uploaded to YouTube every minute, we recognize that we don’t always get it right,” he said, while noting that “in a very small percentage of cases, ads appear against content that violates our monetization policies. We promptly remove the ads in those instances, but we know we can and must do more.”

Britain is not the only nation where tech giants face criticism for inadequately policing their content. Germany announced plans this week to pass a law that would require social networks to publish quarterly accountability reports on how they are fighting hate speech and fake news. Companies could also be fined up to €50 million ($53 million) for failing to remove inappropriate content within 24 hours after a complaint has been filed.
