Governments and individuals are using social media to promote political propaganda, lies and misinformation to manipulate public opinion around the world, a new set of studies from Oxford University claims.
The Computational Propaganda Research Project, which covers nine nations including Brazil, Canada, China, Germany, Poland, Russia, Taiwan, Ukraine and the United States, claims social media is an international battleground for dirty politics.
Twitter and Facebook are “significant platforms for political engagement” and “crucial channels for disseminating news content” where efforts are made to sway opinion, it says.
The most powerful way to spread propaganda involves cooperation between “bots and trolls,” according to the study. This is most apparent in China, it says, where the spread of propaganda over social media is “heavily coordinated.”
The study cites an example in Taiwan, where a campaign against President Tsai Ing-wen is said to have used thousands of heavily coordinated accounts to share Chinese mainland propaganda.
One of the techniques used to alter public opinion is to build fake accounts and then automate them to ‘like,’ share and post on the social networks, the study says.
According to the report, these accounts serve to push content onto social feeds and drown out real issues. Because that content appears to attract large numbers of likes and shares, users are more inclined to believe it, and their opinions are manipulated as a result.
It says in Russia, 45 percent of active Twitter accounts are bots.
The study claims Russia first developed its digital propaganda expertise for dealing with internal threats to stability and drowning out dissent.
The researchers say that in the US, the propaganda took the form of “manufacturing consensus”: creating the illusion of popularity so that a political candidate gains viability they might not otherwise have had.
“The illusion of online support for a candidate can spur actual support through a bandwagon effect,” the US report says.
“[US President Donald] Trump made Twitter center stage in his election, and voters paid attention.”
Researchers found that in Germany, fear of online destabilization outpaced the actual arrival of automated political attacks, leading to the proposal and implementation of world-leading laws that require social networks to take responsibility for what is posted on their sites.
“Germany leads the way as a cautionary authority over computational propaganda, seeking to prevent online manipulation of opinion rather than addressing already present issues,” the report says.
The report also suggests an apparent lack of interest among the social media firms in how their networks are being used.
Facebook, for example, leaves most of its anti-propaganda work to external organizations such as Snopes and the Associated Press, which operate semi-autonomous fact-checking teams aimed at marking viral news stories as true or false.
Twitter’s anti-bot systems are effective at fighting commercial activity on the site, but seem less able or willing to take down automated accounts engaging in political activity, the study says.