‘Godfather of AI’ issues warning and quits Google

1 May, 2023 18:44
Geoffrey Hinton says ‘bad actors’ could harness artificial intelligence for ‘bad things’

Turing Award-winning scientist Geoffrey Hinton, who spent much of the past decade at Google developing generative artificial-intelligence (AI) programs, resigned from the company last month and warned of the risks his life’s work may present to humanity.

Hinton is credited with being a foundational figure in the advent of AI, but he told the New York Times in a lengthy interview published on Monday that he decided to exit amid a de facto arms race in Silicon Valley between Google and Microsoft.

The controversial technology forms the basis of generative AI software such as ChatGPT and Google Bard, as tech-sector giants dip their toes into a new scientific frontier, one they expect to shape their companies’ futures.

Hinton told the NYT that he left Google so he could speak, without oversight, about a technology he now views as posing a danger to mankind. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told the US newspaper.

Public-facing chatbots such as ChatGPT have provided a glimpse into Hinton’s concern. While they are viewed by some as just more internet novelties, others have warned of the potential ramifications as they relate to the spread of online misinformation, and of their impact on employment.

The latest version of ChatGPT, released in March by San Francisco’s OpenAI, prompted the publication of an open letter signed by more than 1,000 tech-sector leaders – including Elon Musk – to highlight the “profound risks to society and humanity” that the technology poses.

And while Hinton didn’t add his signature to the letter, his stance on the potential misuse of AI is clear: “It’s hard to see how you can prevent the bad actors from using it for bad things.”

Hinton maintains that Google has acted “very responsibly” in its stewardship of artificial intelligence, but he says the technology’s proprietors could eventually lose control. This could lead to a scenario, he says, in which false information, photos and videos become indistinguishable from the real thing, leaving people not knowing “what is true anymore.”

“The idea that this stuff could actually get smarter than people – a few people believed that,” Hinton told the NYT. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”