Google AI ethics council disintegrates over... lack of ethics?
An external 'ethics council' appointed by Google to oversee AI development is falling apart after less than a week, as its members endure the same scrutiny Google has turned on the rest of the world and are found morally wanting.
The most right-wing member of Google's eight-person Advanced Technology External Advisory Council is currently on the chopping block, but several names on the roster have been attacked from multiple angles since it was unveiled last Tuesday. Who'd have thought the company that had to stop telling its employees not to be evil would do so poorly at naming responsible overseers for the future of AI development? It's not like Google bought the company that was developing autonomous killer robots, or anything.
A Google employee petition has surfaced demanding the removal of Kay Coles James, president of the infamous right-wing Heritage Foundation think tank and a vocal opponent of equal-rights laws for LGBT people, from the ATEAC, boasting 855 signatures as of Monday evening. Equal rights remains a sore spot for Google months after a massive employee walkout to protest the preferential treatment of senior executives accused of sexual harassment. Coles James, however, was seen by some as a concession to conservatives' complaints about the search engine's treatment of the political Right – and thus a lightning rod for the hostility with which Google treats that same Right. A post on Google's internal employee message boards said James "doesn't deserve a Google-legitimized platform, and certainly doesn't belong in any conversation about how Google tech should be applied to the world."
I'd like to share that I've declined the invitation to the ATEAC council. While I'm devoted to research grappling with key ethical issues of fairness, rights & inclusion in AI, I don't believe this is the right forum for me to engage in this important work. I thank (1/2)
— alessandro acquisti (@ssnstudy) March 30, 2019
On Saturday, another name bit the dust when behavioral economist and privacy researcher Alessandro Acquisti announced he was leaving the group, tweeting that he didn't think it was "the right forum for me to engage in this important work."
Protests began almost immediately after Google unveiled the eight-member slate last Tuesday, when Dyan Gibbens, founder and CEO of drone manufacturer Trumbull Unmanned, was discovered among its luminaries, her company spun as "a veteran-founded startup focused on automation, data, and environmental resilience." Dabbling in drone tech was what forced Google to adopt AI principles in the first place, after 12 employees quit in protest over its Project Maven military drone sideline, drawing public attention the company couldn't deflect with a clever Google Doodle.
But Gibbens is still on board, and another member claims the council's worst ethical baggage isn't even public knowledge. "Believe it or not, I know worse about one of the other people," University of Bath computer science professor Joanna Bryson gossiped on Twitter in response to a complaint about Coles James.
Does everyone know that Joanna Bryson wrote a paper titled "Robots should be slaves" where she advocates for the reinstitution of slavery as a policy goal? This is what the top tier of AI ethics looks like. https://t.co/DzvF1HYg4G
— eripsa (@eripsa) March 27, 2019
Still others have called for Bryson's removal over her paper "Robots Should Be Slaves," which argues that AI should be treated as property rather than granted personhood.
The ATEAC is tasked with examining the ethical problems inherent in AI development, such as "facial recognition and fairness in machine learning," according to Google VP of global affairs Kent Walker. Made up of "philosophers, engineers, and policy experts," it was supposed to do what Google couldn't – the right thing, morally speaking.
The company's AI principles include a mandate to "be socially beneficial," to "incorporate privacy design principles," and to "avoid creating or reinforcing unfair bias." Anyone familiar with Google's history would not be surprised when an eight-person panel of mere humans proved incapable of bringing the company in line with these principles, which Google appears to violate on a daily basis.
It's unsurprising, then, that an ethics panel appointed by Google can't keep from morally cannibalizing itself for more than a week. The search giant has fallen a long way since the halcyon days of "don't be evil," the cutesy motto it adopted around 2000, not long after its founding. That tagline was quietly removed from Google's policy handbooks in 2015, when the company was absorbed into parent Alphabet Inc., but even before the change was made official, Google was demonstrating its moral relativism by fudging the boundaries of what users might consider "evil."
"Employees of Alphabet and its subsidiaries and controlled affiliates should do the right thing—follow the law, act honorably, and treat each other with respect," the current handbook reads, notably leaving Alphabet employees plenty of wiggle room regarding how they treat customers. Google's DeepMind subsidiary arguably violated customers' privacy when it was merged into mainline Google after slurping up their health data, for example, with critics accusing the company of breaking its promise that health information would never be connected to Google accounts. Even getting a straightforward web search out of Google is now an uphill battle in many cases, with ideologically skewed results shielding users from politically sensitive facts.
Yet somehow the company can't resist rubbing it in the faces of those who have no choice but to use its effective monopoly. From the leaked documents outlining Google's self-appointed role as "the Good Censor," insulating (and insulting) users from the rough edges of reality, to its determination to control the future development of AI (the whole point of this ethics council in the first place), Google has proven its definition of ethics is quite a bit different from everyone else's.
Helen Buyniski
The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.