Facebook AI researchers were forced to disable a pair of artificial intelligence chatbots when they discovered that the two had created their own language, shutting out their human operators.
The new language the chatbots developed has been described as a more efficient method of communication than the English they were being trained to use.
The project was originally centred on developing better communication between AI agents and humans, but the chatbots, Alice and Bob, exercised their own volition.
While there has been a great deal of debate regarding the terms AI and machine learning, this instance stands as an example of technology thinking for itself, and co-operating with a counterpart.
Although the startled Facebook AI Research (FAIR) team was able to kill the activity by shutting the system down, this episode brings the fantasy of humans losing control of intelligent technology a step closer to reality.
A FAIR blog post explained: “During reinforcement learning, the agent attempts to improve its parameters from conversations with another agent… While the other agent could be a human, FAIR used a fixed supervised model that was trained to imitate humans.”
“The second model is fixed, because the researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating,” the post continued.
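The setup the post describes, one learning agent trained against a frozen human-imitation model, can be sketched in miniature. The vocabulary, reward function, and update rule below are invented purely for illustration; the real FAIR system used neural dialogue models, not weighted token sampling:

```python
import random

random.seed(0)

VOCAB = ["hi", "ball", "book", "deal", "no-deal"]

class Agent:
    """Toy dialogue agent: one preference weight per token."""
    def __init__(self):
        self.weights = {tok: 1.0 for tok in VOCAB}

    def speak(self):
        # Sample a token in proportion to its current weight.
        tokens, weights = zip(*self.weights.items())
        return random.choices(tokens, weights=weights)[0]

fixed = Agent()    # supervised "human imitator": parameters are never updated
learner = Agent()  # RL agent: parameters are updated from conversations

def reward(utterance):
    # Hypothetical reward: the learner is paid only for closing a deal.
    return 1.0 if utterance == "deal" else 0.0

for episode in range(500):
    said = learner.speak()
    _reply = fixed.speak()  # the fixed model replies, but never learns
    # Reinforcement update applied to the learner ONLY.
    learner.weights[said] += 0.5 * reward(said)

# Only the learner drifts from its starting parameters.
print(learner.weights["deal"] > 1.0)
print(fixed.weights == {tok: 1.0 for tok in VOCAB})
```

Because only one side is updated, the learner is pulled towards whatever the frozen model understands; update both sides, as the quote notes, and the pair can drift into a private shorthand that maximises reward but stops resembling human language.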
With AI among the front-running tech trends set to revolutionise processes across all industries, the Facebook researchers' experience is a reminder that if technologies are not tested thoroughly before implementation, major problems can arise.
The Internet of Things (IoT), for example, was rolled out rapidly: with millions of devices with insufficient security features pumped into society, the world is left with lasting weaknesses that could lead to devastating cyber attacks.