10 Feb, 2017 07:36

Ditch humans or cooperate? Google’s DeepMind tests ultimate AI choice with game theory

DeepMind, the London-based artificial intelligence unit of Google’s parent Alphabet Inc., has been running a series of simulations aimed at answering a key AI question once and for all: will the robots play nice, or will they try to kill us all?

DeepMind’s latest research is focused on the dichotomy between cooperation and competition, specifically among reward-optimized agents (human or synthetic), in highly variable environments.

While the research is far from deciding humanity’s fate, the information gathered thus far gives an indication of the extent to which man and machine may cooperate in the near future, on everything from transportation systems to economics.

The team is trying to expand the comfort zone of existing AI agents in a variety of ways, most recently through two distinct game types that draw heavily on elements from game theory.

In the first game, the two agents must compete to gather as many apples as possible, a straightforward premise centered on scarcity and cooperation. The more plentiful the apples, the more likely the players are to cooperate or, at least, leave each other alone.

However, there is a twist: both players are armed with a ray gun and can stun the other player at any time, immobilizing them for a brief period and allowing the aggressor to gather more resources unimpeded. This is classified as a ‘complex behavior’ within the game, as it requires more computing power, thought, or effort to carry out, as opposed to a singular directive such as collecting apples.
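
The mechanic is simple enough to sketch in a few lines of Python. The snippet below is only an illustrative toy model of the dynamic described above, not DeepMind’s actual environment (which is a pixel-based gridworld played by deep reinforcement learning agents); the stun duration, reward values, and stand-in policies are all assumptions.

```python
# Toy sketch of the Gathering-style dynamic: collect apples for reward,
# or spend a turn zapping the opponent to freeze them. All parameters
# and policies here are illustrative assumptions.

STUN_DURATION = 5   # assumed: steps an agent stays frozen after being tagged
APPLE_REWARD = 1    # assumed: reward for collecting one apple

class Agent:
    def __init__(self, name):
        self.name = name
        self.frozen_for = 0   # steps remaining while stunned
        self.score = 0

def step(agent, opponent, apples_remaining, zap_hits):
    """Advance one agent by one time step.

    `zap_hits` is an assumed flag: True if the agent chose the
    'complex' beam action this step and it connected.
    """
    if agent.frozen_for > 0:          # stunned agents skip their turn
        agent.frozen_for -= 1
        return apples_remaining

    if zap_hits:                      # stun the opponent; no apple gained
        opponent.frozen_for = STUN_DURATION
    elif apples_remaining > 0:        # the simple directive: collect
        agent.score += APPLE_REWARD
        apples_remaining -= 1
    return apples_remaining

# Rollout under scarcity: A zaps occasionally, B never does. Zapping
# buys A uncontested collection time, so aggression pays.
a, b = Agent("A"), Agent("B")
apples = 10
for t in range(20):
    apples = step(a, b, apples, zap_hits=(t % 6 == 0))
    apples = step(b, a, apples, zap_hits=False)
print(a.score, b.score, apples)   # A ends well ahead of B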

The DeepMind team found that the greater the level of intelligence applied (or the larger the neural network supporting the software agent), the more aggressive the software agents became.

The second game, the Wolfpack game, involves hunting for prey for a reward. The twist here is that other wolves in the surrounding area also receive a reward for a successful hunt. The more wolves within the designated area, the greater the reward each wolf receives.
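
A rough sketch of that reward rule: every wolf within some capture radius of the prey is paid out when the prey is caught, and the payout per wolf grows with the number of wolves nearby. The function name, capture radius, base reward, and linear scaling below are illustrative assumptions rather than the study’s actual values.

```python
# Toy sketch of the Wolfpack reward rule: wolves near a capture share
# the reward, and each wolf's payout grows with the size of the pack.

def wolfpack_rewards(wolf_positions, prey_position,
                     capture_radius=2.0, base_reward=1.0):
    """Return each wolf's reward after a successful capture."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    nearby = [w for w in wolf_positions
              if dist(w, prey_position) <= capture_radius]
    # Assumed linear scaling: reward per wolf grows with pack size,
    # so joining the hunt pays better than hunting alone.
    per_wolf = base_reward * len(nearby)
    return {i: (per_wolf if dist(w, prey_position) <= capture_radius else 0.0)
            for i, w in enumerate(wolf_positions)}

# Two wolves near the prey each earn double what a lone hunter would;
# the distant third wolf earns nothing.
print(wolfpack_rewards([(0, 0), (1, 1), (9, 9)], (0, 1)))
```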

This game rewarded cooperation (the complex behavior in this instance) far more than the apples game, regardless of how intelligent the participants were.

The researchers believe there is a propensity towards the more complex behavior in each game as agents become more intelligent: aiming at and zapping an opponent in the apple game, and cooperating for greater rewards in Wolfpack.

Joel Leibo, one of the DeepMind researchers behind the study, emphasized that in the current round of experiments none of the software agents had a functioning short-term memory, and thus could not make inferences about other subjects’ behavior based on past experience.

“Going forward it would be interesting to equip agents with the ability to reason about other agents’ beliefs and goals,” he said.
