February 10, 2017 (updated 13 Feb 2017, 10:33am)

DeepMind games test AI aggression and cooperation behaviours

Will AI cheat if it's not told it shouldn't?

By Hannah Williams

As artificial intelligence develops, the way it behaves changes depending on its environment, according to AI experts at DeepMind.

To understand this change in behaviour, DeepMind scientists have been exploring how and why AI reacts differently in certain situations.

The company began by testing how AI agents interact with one another in what DeepMind calls a ‘social dilemma’: a situation in which an individual profits from selfishness, unless everyone chooses the selfish alternative.

Using a game model as an example, DeepMind turned to the Prisoner’s Dilemma, in which two suspects can each testify against the other in the hope of being released. Researchers used this as a way to test how AI agents would react in such a situation.
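The pull toward selfishness can be seen in the game's standard payoff matrix. A minimal sketch, with illustrative sentence lengths (the numbers are not from the article; any payoffs with the same ordering work):

```python
# Classic Prisoner's Dilemma payoffs as years in prison (lower is better).
# The exact numbers are illustrative; any payoffs with this ordering work.
# Keys are (suspect_a_move, suspect_b_move); values are
# (suspect_a_sentence, suspect_b_sentence).
PAYOFFS = {
    ("stay_silent", "stay_silent"): (1, 1),   # mutual cooperation
    ("stay_silent", "testify"):     (3, 0),   # A exploited, B goes free
    ("testify",     "stay_silent"): (0, 3),   # A goes free, B exploited
    ("testify",     "testify"):     (2, 2),   # mutual defection
}

def best_response(opponent_move):
    """The move that minimises suspect A's sentence, given B's move."""
    return min(["stay_silent", "testify"],
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Testifying is the better reply to either move the other suspect makes...
assert best_response("stay_silent") == "testify"
assert best_response("testify") == "testify"
# ...yet mutual silence (1, 1) beats mutual testimony (2, 2): the dilemma.
```

Each suspect is individually better off testifying, yet both would do better if both stayed silent, which is exactly the tension DeepMind's experiments probe.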

Read more: Google’s DeepMind AI masters lip-reading

The first example was a Gathering game, which the agents played many times over, learning to behave rationally through multi-agent reinforcement learning.

The researchers found that when there were enough apples in the area, the agents naturally learned to coexist and collect as many apples as they could together. However, as the supply dwindled, the agents learned to disrupt each other in order to collect what was left for themselves.
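In multi-agent reinforcement learning, each agent typically runs its own learner and treats the other agents as part of the environment. A toy sketch of one such learner, tabular Q-learning; the hyperparameters and the "scarce apples" scenario are our illustration, not DeepMind's setup:

```python
import random
from collections import defaultdict

# Tabular Q-learning, a basic building block of multi-agent reinforcement
# learning: each agent keeps its own value table and treats rivals as part
# of the environment. Hyperparameters are illustrative, not DeepMind's.
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def make_agent(actions):
    q = defaultdict(float)  # (state, action) -> estimated long-run reward

    def act(state):
        if random.random() < EPSILON:                      # explore sometimes
            return random.choice(actions)
        return max(actions, key=lambda a: q[(state, a)])   # otherwise exploit

    def learn(state, action, reward, next_state):
        best_next = max(q[(next_state, a)] for a in actions)
        # Nudge Q(s, a) toward the reward plus discounted future value.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

    return act, learn, q

# Hypothetical experience: when apples are scarce, disrupting the rival
# ("zap") pays more than gathering, so the agent learns to prefer it.
act, learn, q = make_agent(["gather", "zap"])
for _ in range(500):
    learn("scarce", "zap", reward=1.0, next_state="scarce")
    learn("scarce", "gather", reward=0.2, next_state="scarce")
assert q[("scarce", "zap")] > q[("scarce", "gather")]
```

The point of the sketch is that aggression is not programmed in: it emerges whenever the experienced rewards happen to favour it, as they did once apples grew scarce.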

The second example, a Wolfpack game, revealed quite the opposite: because the game requires close coordination to succeed, the agents learned to alter their behaviour and cooperate with one another.
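In a Wolfpack-style game, cooperation can be built into the reward itself: wolves near the prey at capture time share a payoff that grows with the size of the pack. A hedged sketch of such a reward rule (our illustration, not DeepMind's code):

```python
# Illustrative Wolfpack-style reward rule (our sketch, not DeepMind's code):
# every wolf within the capture radius when the prey is caught shares a
# payoff that grows with the size of the pack, so hunting together beats
# hunting alone. Positions are (row, col) grid coordinates.
def wolfpack_rewards(wolf_positions, prey_position, capture_radius=2, payoff=10.0):
    def dist(a, b):
        # Manhattan distance on the grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    nearby = [w for w in wolf_positions if dist(w, prey_position) <= capture_radius]
    if not nearby:
        return [0.0] * len(wolf_positions)    # no capture, no reward
    shared = payoff * len(nearby)             # bigger pack, bigger payoff
    return [shared if dist(w, prey_position) <= capture_radius else 0.0
            for w in wolf_positions]

# Two wolves close to the prey each earn more than a lone hunter would.
assert wolfpack_rewards([(0, 0), (1, 1), (5, 5)], (0, 1)) == [20.0, 20.0, 0.0]
assert wolfpack_rewards([(0, 0)], (0, 1)) == [10.0]
```

With a reward shaped this way, the selfish thing for each learner to do is to stay close to the pack, so cooperation emerges for the same reason aggression did in the Gathering game.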

Read more: DeepMind AI gives robots ‘dreams’

In an interview with WIRED, Joel Z Leibo, a research scientist at DeepMind, said: “At this point we are really looking at the fundamentals of agent cooperation as a scientific question, but with a view toward informing our multi-agent research going forward.

“However, longer-term this kind of research may help to better understand and control the behaviour of complex multi-agent systems such as the economy, traffic, and environmental challenges.”

DeepMind researchers found that agents’ behaviour depends on both the environment they are placed in and the rules that govern it. In other words, if no rules are set for the agents, they will act questionably; once rules are in place, their behaviour changes to follow them.
