Up to 10,000 racist tweets are sent every day, a new study has revealed. Researchers came to the conclusion after analyzing some 126,975 English-language tweets from across the globe over a nine-day period.
Roughly 160 million tweets were being sent in English per day at the time the research was undertaken. Of those, roughly 14,000 a day contained at least one of the slurs searched for, research by the UK's leading cross-party think-tank, Demos, showed.
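That daily figure follows directly from the sample size. Assuming tweets were collected at a roughly constant rate over the nine-day window (our arithmetic, not a calculation quoted in the report):

126,975 tweets ÷ 9 days ≈ 14,108 slur-bearing tweets per day.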
"The medium of Twitter provides an unprecedented source of
data for studying slurs," researchers emphasized in their
report, with the ten most common terms found in the data set (in
order of prevalence) ‘white boy’, ‘Paki’, ‘whitey’, ‘pikey’,
‘nigga’, ‘spic’, ‘crow’, ‘squinty’ and ‘wigga’.
According to the Anti-Social Media study, the top five slurs
account for over 75 percent of relevant tweets. The single most
prevalent term in the data set, ‘white boy’, appeared in 49
percent of the tweets collected.
The researchers also stated that of the roughly 10,000 tweets
employing racial and/or ethnic slurs every day, about 7,000 use
them in a non-derogatory fashion.
Slurs are most commonly used in a non-offensive, non-abusive
manner: between 50 and 70 percent of tweets expressed in-group
solidarity or non-derogatory description. How much of the
remainder counts as prejudiced depends on how casual uses are
classified: "If casual use of slur terms is included in the human
analysis (as in ‘pikey’ being interchangeable with ‘West Ham
supporter’), the proportion rises to about 50 percent," the
report stated.
The researchers estimated that about one in every 55,000
English-language tweets is indicative of racial or ethnic
prejudice on the part of the sender.
"Of these apparently racially- or ethnically-prejudiced
tweets, manual classification of 500 tweets sampled randomly from
this group suggested that around 30 percent show casual use of
slur terms, with the balance of tweets making comments that are
more directly racially or ethnically prejudicial," the
analysis showed.
"This suggests a prevalence of directed racially or
ethnically prejudicial tweets of about one in 75,000 tweets in
the English language," Demos' report stated, adding that
according their estimate, at the very most, fewer than 100 tweets
are sent each day which might be interpreted as threatening any
kind of violence or offline action.
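Set against the roughly 160 million English-language tweets sent per day, those prevalence rates imply (our arithmetic, not figures quoted in the report):

160,000,000 ÷ 55,000 ≈ 2,900 tweets per day indicative of racial or ethnic prejudice;
160,000,000 ÷ 75,000 ≈ 2,100 tweets per day that are directly prejudicial.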
"Targeted abuse and specific threats of violence are violations
of our rules, and users can report this type of content from
within the Twitter application or at this link on our
website," a Twitter spokesperson told MailOnline.
According to the Anti-Social Media study, many people use
Twitter for one-to-one communication, even though that
communication is technically viewable by all. As a result, many
essentially private conversations are conducted over the popular
social network.
"Our working hypothesis is that many strongly invective messages
containing racially insulting phraseology are ‘snapshots’ from
essentially private arguments." Therefore, forming a
judgment over whether a message reflects an insult in the heat of
a personal argument or a more ‘considered’ racist attack requires
more context than is typically available from a single tweet,
researchers point out.
The topic of offensive language was last debated in the UK in
January, when three men were charged with racially aggravated
offences in connection with chanting the word ‘Yid’, meaning Jew,
at two football matches. The word was allegedly used at Tottenham
Hotspur matches against FC Sheriff and West Ham United; fans have
ignored police warnings to stop using it in their chants.