30 Aug, 2016 13:13

AI machines are racist & it’s all our fault - report

For decades many have feared the rise of robots would lead to AI machines dominating the planet and subjugating humans. Turns out they’re not necessarily a threat to mankind’s survival - they’re just racist bigots.

The racist bot phenomenon became glaringly obvious back in March with Microsoft’s latest chatbot named Tay. Launched amid great fanfare, Tay took less than 24 hours to go rogue...or become a Nazi sympathizer to be precise.

READ MORE: Botty-mouth: Microsoft forced to apologize for chatbot's racist, sexist & anti-semitic rants

The Twitter bot was supposed to learn through engagement and conversation with humans but instead began to aggregate and copy utterances from the more mischievous - and openly hostile - elements lurking online.

Feminists, Jews and Mexicans were caught in Tay’s crosshairs and it also developed something of a potty mouth.

Now, an AI system named GloVe (Global Vectors for Word Representation) is causing quite a commotion.

GloVe is perhaps slightly more subtle in its bigotry than Tay but equally offensive, according to researchers at Princeton University.

Using GloVe’s algorithm, the researchers conducted a word-association test in which the AI system was asked to match particular words with ‘pleasant’ or ‘unpleasant’ terms.

‘White’ names such as Emily and Matt were paired by GloVe with ‘pleasant’ words containing positive connotations, while Ebony and Jamal - names more associated with the black community - were matched with ‘unpleasant’ words. As for gender, GloVe made some word associations based on traditional roles. Female terms were more likely to be paired with ‘family’ or ‘the arts’ while male terms were matched with ‘career’ or ‘maths’.
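A test of this kind boils down to measuring how close words sit to one another in the model’s vector space. The sketch below is not the researchers’ code; it is a minimal illustration, assuming a locally downloaded file of pre-trained GloVe vectors (the path and word lists here are purely illustrative), of how an association score can be computed by comparing a name’s average similarity to ‘pleasant’ words against its average similarity to ‘unpleasant’ ones.

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a plain-text file: one word followed by its floats per line."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pleasant, unpleasant, vecs):
    """Mean similarity to 'pleasant' words minus mean similarity to 'unpleasant' words.
    A positive score means the word sits closer to the pleasant set."""
    pos = np.mean([cosine(vecs[word], vecs[a]) for a in pleasant if a in vecs])
    neg = np.mean([cosine(vecs[word], vecs[a]) for a in unpleasant if a in vecs])
    return pos - neg

if __name__ == "__main__":
    # Hypothetical local path to pre-trained GloVe vectors; word lists are illustrative only.
    vecs = load_glove("glove.6B.300d.txt")
    pleasant = ["joy", "love", "peace", "wonderful", "pleasure"]
    unpleasant = ["agony", "terrible", "horrible", "nasty", "evil"]
    for name in ["emily", "matt", "ebony", "jamal"]:
        print(name, association(name, pleasant, unpleasant, vecs))
```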

But here’s the catch: Although GloVe is “self-learning,” it gathers information by reading text and data from the internet - so its prejudices are essentially picked up from us.

“Our results indicate that language itself contains recoverable and accurate imprints of our historic biases...machine learning absorbs prejudice as easily as other biases,” read the researchers’ report, which is awaiting publication.

“We show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language - the same sort of language humans are exposed to every day.”
