12 Jan, 2021 04:49

Stanford scientist’s study says AI can tell by examining your FACE what your politics are

Facial recognition algorithms can be trained to recognize people’s political views, Stanford-affiliated researcher Michal Kosinski claims, saying his most recent study told liberals and conservatives apart with 72 percent accuracy.

Properly trained facial recognition algorithms can correctly guess a person’s political orientation nearly three-quarters of the time, Kosinski said in a paper published on Monday in Scientific Reports. Trained on over a million profiles from Facebook and dating sites across the US, UK, and Canada, the algorithm he tested correctly picked out the conservative from the liberal in 72 percent of face pairs.

The figure may not seem high, but keep in mind that a random pick would give 50 percent accuracy, while a human trying to judge political affiliation from a person’s appearance achieves only about 55 percent. Even when obvious features that correlate with political views, such as age and race, were adjusted for, the software remained around 70 percent accurate, according to the study.
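To make the pair-based figure concrete: the paper reports accuracy over pairs of one liberal and one conservative face, counting how often the model ranks the liberal face higher on its ‘liberal’ score. The sketch below is illustrative only, with made-up scores; it does not reproduce the study’s actual pipeline, only the metric behind the headline numbers.

```python
import numpy as np

def pairwise_accuracy(lib_scores, con_scores):
    """Share of (liberal, conservative) face pairs in which the model
    gives the liberal face the higher 'liberal' score; ties count half.
    This is the kind of pair-comparison metric behind the 72% figure."""
    diff = lib_scores[:, None] - con_scores[None, :]
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

rng = np.random.default_rng(0)

# A model that scores both groups identically is right about half the
# time -- the 50 percent 'random pick' baseline mentioned above.
print(pairwise_accuracy(rng.normal(0, 1, 500), rng.normal(0, 1, 500)))

# Hypothetical scores with a modest gap between the two groups already
# land near the study's reported range (roughly 0.7).
print(pairwise_accuracy(rng.normal(0.8, 1, 500), rng.normal(0, 1, 500)))
```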

As is typical with AI, there is no telling exactly which features the algorithm picked up on to make its predictions. The authors made an educated guess that head orientation and emotional expression were among the more telling cues: liberals, for example, were more likely to look directly at the camera, and more likely to look surprised than disgusted. Beards and spectacles, on the other hand, barely affected the accuracy of the predictions.
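For a sense of how such cues could be tested, here is a hypothetical sketch, not taken from the paper: if faces were coded into interpretable features (gaze direction, expression, beard, glasses), a simple logistic regression would flag which cues carry signal through the size of its coefficients. All names and numbers below are synthetic, chosen only to mirror the pattern the authors describe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Synthetic, hand-coded cues (hypothetical: the study worked on deep
# facial descriptors, not simple features like these).
gaze_at_camera = rng.normal(0, 1, n)
surprise = rng.normal(0, 1, n)
beard = rng.integers(0, 2, n).astype(float)
glasses = rng.integers(0, 2, n).astype(float)

# Generate labels that depend on gaze and expression but not on beard
# or glasses, mirroring the pattern reported in the study.
logit = 0.9 * gaze_at_camera + 0.6 * surprise
label_liberal = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([gaze_at_camera, surprise, beard, glasses])
model = LogisticRegression().fit(X, label_liberal)

# Large coefficients flag informative cues; beard/glasses stay near zero.
for name, coef in zip(["gaze", "surprise", "beard", "glasses"], model.coef_[0]):
    print(f"{name:8s} {coef:+.2f}")
```

No such clean decomposition exists for the real system, which is exactly why the authors could only guess at the cues.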


The conclusions of the study go much further, diving deep into the realms of facial-recognition dystopia. Algorithms more accurate than any human judge, paired with publicly available images of millions of people, could be used to screen individuals, without their consent or even their knowledge, on criteria most would consider part of their private lives. Kosinski’s earlier study used the same approach to predict sexual orientation, and he believes the technology may bring with it a truly nightmarish future.

One does not need to go far for an example. Faception, a ‘Minority Report’-esque Israeli program, purports to predict not only an individual’s place on the political spectrum, but also that person’s likelihood of being a terrorist, paedophile, or other serious criminal. Kosinski’s work has won him a degree of infamy in the past: Faception’s developers count him among the people they approached to consult on the product, though he says he merely told them he had qualms about its ethics.

Kosinski’s work remains controversial. The 2017 ‘algorithmic gaydar’ study, for example, was attacked by LGBT advocacy groups across the US unhappy with its ramifications. The science behind it was criticized by other AI and psychology researchers, who said he had conflated facial features with cultural cues, though they did not dispute his point about the dangers of mass surveillance.

Others see such studies as nothing but quackery, given their striking resemblance to the notorious pseudoscience of physiognomy. Its adherents claimed they could assess an individual’s character, personality, even criminal propensities from the shape of their face, but in practice their predictions revealed more about their own biases. Kosinski himself has denounced the discipline as “based on unscientific studies, superstition, anecdotal evidence, and racist pseudo-theories,” yet he insists the AI-powered approach works.

Kosinski’s name is sometimes mentioned in connection with Cambridge Analytica, the now-defunct company that mass-harvested Facebook data and claimed it could use the trove to run highly targeted political campaigns. The connection never actually existed and appears to stem from “inaccurate press reports” dating to 2018, when the scandal over the firm’s dubious business model first erupted.

Editorial note: This story has been changed by RT since its first publication to better reflect the stated goals of Michal Kosinski’s research into facial recognition applications, and to correctly state that he was not connected to Cambridge Analytica.
