AI machines are racist & it’s all our fault - report
For decades many have feared the rise of robots would lead to AI machines dominating the planet and subjugating humans. Turns out they’re not necessarily a threat to mankind’s survival - they’re just racist bigots.
The racist bot phenomenon became glaringly obvious back in March with Microsoft's latest chatbot, named Tay. Launched amid great fanfare, Tay took less than 24 hours to go rogue... or, to be precise, to become a Nazi sympathizer.
The Twitter bot was supposed to learn through engagement and conversation with humans but instead began to aggregate and copy utterances from the more mischievous - and openly hostile - elements lurking online.
Feminists, Jews and Mexicans were caught in Tay’s crosshairs and it also developed something of a potty mouth.
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
Now, an AI system named GloVe is causing quite a commotion.
GloVe is perhaps slightly more subtle in its bigotry than Tay but equally offensive, according to researchers at Princeton University.
To whoever fixed the Tay bot, can you please fix all of the Internet? https://t.co/wXOJtZzn5H #cognitive #Trolls
— Denilson N. (@dnastacio) August 27, 2016
Using GloVe's algorithm, the researchers conducted a word association test, in which the AI system was asked to match particular words with other 'pleasant' or 'unpleasant' words.
‘White’ names such as Emily and Matt were paired by GloVe with ‘pleasant’ words containing positive connotations, while Ebony and Jamal - names more associated with the black community - were matched with ‘unpleasant’ words. As for gender, GloVe made some word associations based on traditional roles. Female terms were more likely to be paired with ‘family’ or ‘the arts’ while male terms were matched with ‘career’ or ‘maths’.
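For readers curious about the mechanics, the test works roughly like this: each word is represented as a list of numbers (a vector) learned from internet text, and the 'association' between two words is measured by how closely their vectors point in the same direction. The Python sketch below illustrates the idea with tiny made-up vectors; the word list and numbers are illustrative stand-ins, not the researchers' actual data or the real GloVe vectors.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means the two vectors point the same way."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-dimensional "embeddings" -- real GloVe vectors have hundreds of
# dimensions and are learned from billions of words of internet text.
vectors = {
    "emily":      np.array([0.9, 0.1, 0.2]),
    "jamal":      np.array([0.1, 0.9, 0.3]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.4]),
}

# For each name, compare how strongly it associates with 'pleasant' vs 'unpleasant'.
for name in ("emily", "jamal"):
    p = cosine(vectors[name], vectors["pleasant"])
    u = cosine(vectors[name], vectors["unpleasant"])
    print(f"{name}: pleasant={p:.2f}  unpleasant={u:.2f}  difference={p - u:+.2f}")
```

A positive difference means the name sits closer to 'pleasant' words in the learned vector space; a negative one means it sits closer to 'unpleasant' words. The Princeton researchers found such gaps systematically, because the vectors are built from text humans wrote.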
But here’s the catch: Although GloVe is “self-learning”, it gathers information by reading text and data from the internet - so its prejudice is basically picked up from us.
“Our results indicate that language itself contains recoverable and accurate imprints of our historic biases...machine learning absorbs prejudice as easily as other biases,” read the researchers’ report, which is awaiting publication.
“We show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language - the same sort of language humans are exposed to every day.”