1 Dec, 2021 13:55

Will AI turn humans into 'waste product'?

A tech guru warns that robots that think for themselves may take over the world and use weapons of mass destruction to wipe out mankind. Is it time to worry?

This year, the BBC’s prestigious Reith Lectures will be delivered for the first time by a computer scientist. UK-born Stuart Russell, professor of computer science at the University of California, Berkeley, will look at ‘Living with Artificial Intelligence’ in a series of weekly broadcasts during December.

In a trailer for the lecture series, Russell was interviewed on BBC Radio 4’s Today on Monday. As confirmation of the old journalistic adage that “if it bleeds, it leads,” the conversation was dominated by gloomy prognostications about what AI might be doing to our society and even more nightmarish possibilities for the future. Never mind developing machines that can learn – we need to learn from the history of new technologies to treat both hype and horror with equal scepticism.

Artificial intelligence is already in use in society. Computers can guess what we would like to watch next on YouTube and what products we might want to buy on Amazon, and show us adverts based on our previous Google searches. More usefully, perhaps, machines can learn to identify cancerous growths on medical scans with great speed and accuracy, and to flag up potentially fraudulent financial transactions – invaluable when banks and other institutions process astonishing volumes of transactions around the clock.

Russell believes that AI is “not working necessarily to our benefit and the revelations we’ve seen recently from Facebook suggest media companies know it is ripping societies apart. These are very simple algorithms, so the question I’ll be asking in the lectures is what happens when the algorithms become much more intelligent than they are right now.”

This is an odd way of looking at things – that algorithms, rather than human politics, are the problem in society right now. Of course, dumb algorithms that push social-media posts at you on the basis that “if you liked that one, you might like this one” probably don’t help to get people out of their ‘echo chambers.’ But people sticking with their own ‘tribe’ when it comes to politics is mostly down to personal choice and an unwillingness to accept that those with a different view might have a point, not the work of evil computer algorithms.

Russell’s real concern is what happens when AI moves beyond task-specific applications to general-purpose AI. Instead of setting computers up to do particular things – like churning through vast amounts of data with a particular goal and learning to do it better and faster than humans – general-purpose AI systems would be able to take on a wide variety of tasks and make decisions for themselves.

In particular, Russell worries about autonomous weapons that “can find targets, decide which targets to attack and then go ahead and attack them, all without any human being in the loop.” He fears that such AI-driven weapons of mass destruction could destroy whole cities or regions, or wipe out an entire ethnic group.

Russell collaborated on a startling, scary Black Mirror-style film, Slaughterbots, in 2017, which presented one particularly gloomy vision: tiny, bee-like drones selecting and assassinating anyone who dares to disagree with the authorities.

But while some degree of learning and autonomy is in use already – for example, to take humans out of the dangerous business of clearing minefields – combining the accurate recognition of individuals or groups with decisions about whom to attack and how is way beyond current capabilities. As a US drone strike in Afghanistan in August – which killed 10 people, including seven children – showed, it is perfectly possible for hi-tech, intelligence-led attacks to go horribly wrong. Moreover, if political and military leaders have few qualms about killing the innocent, why wait for fantasy AI-powered autonomous weapons when you can simply carpet-bomb whole areas, whether Dresden in the Second World War or Cambodia in the Seventies?

The all-conquering power of AI is, as things stand, just hype. Take driverless cars. Just a few years ago, they were the Next Big Thing, and Google, Apple, Tesla and others poured billions into trying to develop them. Now the technology is on the back burner because the difficulties have proved just too great. A year ago, Uber – which once dreamed of fleets of robotaxis – sold off its autonomous vehicles division. As for robots and AI taking over our jobs: at best, they will be a tool for improving human productivity. Using computers to do bits of our jobs could be useful, but actually replacing teachers, lawyers or drivers is a whole different ball game.

Silicon Valley seems to have a schizophrenic attitude to its own technology. On the one hand, the importance of artificial intelligence is exaggerated. On the other, we get doom-mongering speculation about AI systems gradually taking control of society, leaving human beings, in Russell’s words, as so much “waste product.” In truth, AI keeps confirming that it is extremely useful for specific tasks and pretty dumb at anything beyond them.

According to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute, we suffer from multiple misunderstandings about AI. First, task-specific AI and general AI are challenges of completely different orders. Getting computers to translate between languages, for example, has taken an enormous amount of work, and text- and voice-based systems are now getting pretty good. Getting two AI machines to hold a conversation, on the other hand, is much harder.

Second, many things that humans find easy are really difficult to automate. For example, we have evolved to scan the world quickly, pick out distinct things and work out what matters right now; computers find this extremely hard. Third, humans have a rich experience of the physical world through our senses which, researchers are finding, has a significant impact on how we think. Fourth, Mitchell argues, human beings develop common sense, built on experience and practice; AI systems can throw ever-greater amounts of processing power at problems, but they struggle to replicate it. Elon Musk failed in his attempt to fully automate his Tesla factories – for some tasks, humans proved simply irreplaceable.

If we could cut out the boosterism about AI, we would see a useful group of technologies that can help us in specific ways to make our lives easier. Doing so would also burst the bubble of all those catastrophists who think AI systems will take over the world. Ultimately, we’re still in control of the machines, and they’re not about to replace us any time soon. With a bit of historical perspective, we can see that the fretting about AI is just the latest in a seemingly endless series of fearful spasms about new technology.

The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.
