Google’s DeepMind pits AI against AI to see if they fight or cooperate!

Google’s AI subsidiary DeepMind has been exploring this problem in new research. The company’s researchers tested how AI agents interact with one another in a series of simple games modeled on social dilemmas.

The most famous example of a social dilemma is the prisoner’s dilemma, in which two individuals can each choose to betray the other for a reward, but both lose out if both choose to betray.
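As a rough illustration of why this counts as a dilemma (the payoff numbers below are generic textbook values, not figures from DeepMind’s study), betraying is always the individually best reply, yet mutual betrayal leaves both players worse off than mutual cooperation:

```python
# Classic prisoner's dilemma payoffs (generic textbook values, not from
# DeepMind's study): each entry is (player A's reward, player B's reward).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both stay quiet
    ("cooperate", "defect"):    (0, 5),  # A is betrayed
    ("defect",    "cooperate"): (5, 0),  # A betrays
    ("defect",    "defect"):    (1, 1),  # both betray and both lose out
}

def best_response(opponent_action: str) -> str:
    """Return the action that maximises player A's payoff against a fixed opponent."""
    return max(("cooperate", "defect"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defecting is the best reply to either choice, yet mutual defection (1, 1)
# pays less than mutual cooperation (3, 3) -- that tension is the dilemma.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```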

In the first game, Gathering, two players have to collect apples from a central pile. Each has the option of “tagging” the other player with a laser beam, temporarily removing them from the game and giving the tagger a chance to collect more apples. In the second game, Wolfpack, two players have to hunt a third in an environment filled with obstacles. Points go not just to the player that captures the prey, but to every player near the prey when it is captured.
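For a concrete feel for the setup, here is a deliberately tiny, hypothetical sketch of a Gathering-style game in Python. The class name, the turn-based rules, and the hand-written policies are all invented for illustration; DeepMind’s actual environment is a 2D gridworld played by deep reinforcement learning agents, not this toy.

```python
class MiniGathering:
    """Hypothetical, much-simplified sketch of a Gathering-style game:
    two agents share a pool of apples and may 'tag' each other, removing
    the tagged agent for a few turns. Not DeepMind's actual environment."""

    def __init__(self, apples: int = 10, tag_timeout: int = 3):
        self.apples = apples
        self.tag_timeout = tag_timeout
        self.frozen = {0: 0, 1: 0}   # turns each agent remains tagged out
        self.score = {0: 0, 1: 0}

    def step(self, actions: dict[int, str]) -> None:
        """actions maps agent id -> 'collect' or 'tag'."""
        for agent, action in actions.items():
            if self.frozen[agent] > 0:
                self.frozen[agent] -= 1          # tagged agents sit out
                continue
            if action == "tag":
                self.frozen[1 - agent] = self.tag_timeout
            elif action == "collect" and self.apples > 0:
                self.apples -= 1
                self.score[agent] += 1

# Example: agent 0 tags when apples run low (and the other agent is free);
# agent 1 always collects.
env = MiniGathering()
for _ in range(15):
    a0 = "tag" if env.apples < 4 and env.frozen[1] == 0 else "collect"
    env.step({0: a0, 1: "collect"})
print(env.score)  # {0: 6, 1: 4} with these simple hand-written policies
```

Even in this toy version the core tension shows up: tagging costs a turn that could have been spent collecting, but it pays off when apples are scarce, which mirrors the context-dependent behavior described below.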

What the researchers found was interesting, but perhaps not surprising: the AI agents altered their behavior, becoming more cooperative or more antagonistic depending on the context. That means part of the challenge of controlling AI agents in the future will be making sure the right rules are in place.
