As Artificial Intelligence develops, it also changes the way it behaves depending on its environment, according to AI experts at DeepMind.
To understand this change in behaviour, DeepMind scientists have been exploring how and why AI reacts differently in certain situations.
The company began testing how AI agents interact with one another in what DeepMind refers to as a ‘social dilemma’: a situation in which an individual profits from selfishness unless everyone chooses the selfish alternative, in which case the whole group loses out.
Using a game model as an example, DeepMind referred to the Prisoner’s Dilemma, in which two suspects are each given the chance to testify against the other in the hope of being released. Researchers used this as a way of testing how AI agents would react in such a situation.
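To make the dilemma concrete, here is a minimal sketch of the classic payoff structure. The values are illustrative textbook placeholders, not figures from DeepMind’s experiments: whatever the other player does, defecting pays the individual more, yet mutual defection leaves both players worse off than mutual cooperation.

```python
# Illustrative Prisoner's Dilemma payoffs (higher is better).
# These numbers are textbook placeholders, not values from DeepMind's study.
COOPERATE, DEFECT = "cooperate", "defect"

PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),   # both stay silent
    (COOPERATE, DEFECT):    (0, 5),   # the defector goes free, the cooperator is punished
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),   # both testify and both suffer
}

# Whatever the other player does, defecting pays more for the individual...
for other in (COOPERATE, DEFECT):
    print(other, "->", PAYOFFS[(DEFECT, other)][0], ">", PAYOFFS[(COOPERATE, other)][0])

# ...yet mutual defection (1, 1) is worse for everyone than mutual cooperation (3, 3).
```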
The first example used a Gathering game that the agents played many times, learning how to behave rationally through multi-agent reinforcement learning.
It was found that when there were enough apples in the area, the agents naturally learned to coexist and collect as many apples as they could together. However, when the supply began to run low, the agents learned to disrupt each other, giving themselves more time to collect what was left for themselves.
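DeepMind’s actual experiments used deep reinforcement learning agents on a 2D gridworld, but the core idea of independent, self-interested learners can be sketched with two simple Q-learners playing a stripped-down ‘apples and zapping’ game. Everything below, from the payoff mechanics to the hyperparameters, is an assumption chosen purely for illustration.

```python
import random

# A toy, two-agent stand-in for DeepMind's Gathering experiment.
# All numbers and mechanics here are simplified assumptions; the real study
# used deep RL agents on a 2D gridworld with a tagging beam.

COLLECT, ZAP = 0, 1
CAPACITY = 5   # gathering steps per episode (assumed)
ZAP_COST = 1   # steps the zapper spends firing the beam (assumed)
TAG_OUT  = 2   # steps a tagged agent spends removed from play (assumed)

def play_round(a1, a2, apples):
    """Return the apples each agent collects in one stateless 'episode'."""
    # Gathering time left after paying for zapping / being tagged.
    t1 = CAPACITY - (ZAP_COST if a1 == ZAP else 0) - (TAG_OUT if a2 == ZAP else 0)
    t2 = CAPACITY - (ZAP_COST if a2 == ZAP else 0) - (TAG_OUT if a1 == ZAP else 0)
    if a1 == ZAP and a2 == COLLECT:
        r1 = min(t1, apples)          # the zapper gathers unopposed first
        r2 = min(t2, apples - r1)
    elif a2 == ZAP and a1 == COLLECT:
        r2 = min(t2, apples)
        r1 = min(t1, apples - r2)
    else:                             # both collect, or both zap: split what is reachable
        r1 = min(t1, apples / 2)
        r2 = min(t2, apples / 2)
    return max(r1, 0.0), max(r2, 0.0)

def train(apples, episodes=5000, eps=0.1, lr=0.1):
    """Two independent, self-interested (bandit-style) Q-learners."""
    q1, q2 = [0.0, 0.0], [0.0, 0.0]   # Q-values for COLLECT / ZAP
    for _ in range(episodes):
        a1 = random.randrange(2) if random.random() < eps else max((COLLECT, ZAP), key=lambda a: q1[a])
        a2 = random.randrange(2) if random.random() < eps else max((COLLECT, ZAP), key=lambda a: q2[a])
        r1, r2 = play_round(a1, a2, apples)
        q1[a1] += lr * (r1 - q1[a1])  # each agent updates only its own values
        q2[a2] += lr * (r2 - q2[a2])
    return q1, q2

random.seed(0)
for apples, label in [(20, "abundant"), (4, "scarce")]:
    q1, _ = train(apples)
    choice = "COLLECT peacefully" if q1[COLLECT] >= q1[ZAP] else "ZAP the other agent"
    print(f"{label:8s} apples -> agent learns to {choice}")
```

In this toy version, the learners settle on peaceful collecting when apples are abundant and on zapping when they are scarce, mirroring the pattern of behaviour reported in the study.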
The second example, a Wolfpack game, revealed quite the opposite: because the game requires close coordination to succeed, the agents learned to alter their behaviour and cooperate with one another.
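A rough way to see why Wolfpack pulls in the other direction is its reward structure: a wolf that captures prey with a partner nearby earns more than one that hunts alone. The sketch below is not DeepMind’s code, just a toy reward rule with assumed values that captures that incentive.

```python
# Toy version of a Wolfpack-style capture reward (illustrative numbers, not DeepMind's).
# A lone capture is worth less than a capture made with a partner nearby,
# so self-interested learners are pushed toward coordinating.

LONE_CAPTURE_REWARD = 1.0   # assumed value: a single wolf may lose the carcass
TEAM_CAPTURE_REWARD = 5.0   # assumed value: paid to every wolf near the capture

def capture_reward(wolves_near_capture: int) -> float:
    """Reward given to each wolf within the capture radius."""
    if wolves_near_capture <= 1:
        return LONE_CAPTURE_REWARD
    return TEAM_CAPTURE_REWARD

print(capture_reward(1))  # 1.0 -- hunting alone pays little
print(capture_reward(2))  # 5.0 -- a coordinated capture pays every nearby wolf more
```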
In an interview with WIRED, Joel Z Leibo, a research scientist at DeepMind, said: “At this point we are really looking at the fundamentals of agent cooperation as a scientific question, but with a view toward informing our multi-agent research going forward.
“However, longer-term this kind of research may help to better understand and control the behaviour of complex multi-agent systems such as the economy, traffic, and environmental challenges.”
DeepMind’s researchers found that the agents’ behaviour depends on the environment and situation they are put in, and on the rules that govern it. If clear rules are not set for the agents, they may act in questionable ways; put rules in place, and their behaviour changes to follow them.