An AI capable of reproduction was Dr Tyrell’s ultimate aspiration in Blade Runner 2049 – but now a neural network at Google has created its own ‘child’.

Researchers at Google have successfully designed an automated machine learning system that assembles and trains entirely new neural networks. The programme, part of Google’s AutoML project, automates the design of ML models, saving scientists months of neural network testing and refinement; the best-performing architecture it produced has been named ‘NASNet’. The results went on to be incorporated into Google’s large-scale image classification and object recognition work.

Building on previous research into evolutionary algorithms and reinforcement learning, the system is capable of producing small neural networks that perform as well as those designed by humans. Google first announced its AutoML project back in May, and this breakthrough marks a major step forward in the use of advanced ML for image recognition and object detection.

The so-called AI ‘child’ consistently performed 2-3% better in lab testing than neural networks put together by human scientists. With recent reports on AI innovation predicting that between 20% and 30% of UK jobs will be replaced in the coming decades, perhaps data scientists should also be prepared for a career change!

Its prediction accuracy on the ImageNet validation set is 82.7% – an improvement on all previous models the team had managed to produce. NASNet performed 1.2% better than all previously published results and about as well as the best unpublished result on arXiv.org, Google scientists noted on their research blog.


However, the AutoML process is far more laborious than the word ‘automation’ may suggest; in the early stages of network creation, scientists must analyse reams of neural network test results to ensure the controller neural net puts that feedback into practice effectively. Once the ‘parent’ network is optimised to propose a given architecture with a certain probability, it can train a ‘child’ network with that architecture to attain the target accuracy.
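
To make that ‘parent’/‘child’ step concrete, here is a toy, self-contained Python sketch of one cycle: a controller samples an architecture decision from its output probabilities, and a child network built with that decision is trained and scored. The candidate layer names and the stand-in evaluation function are illustrative assumptions, not Google’s actual AutoML code; feeding the accuracy back to the controller is covered in the next sketch.

```python
import math
import random

# Toy search space: the single design decision the controller must make.
CANDIDATE_LAYERS = ["3x3_conv", "5x5_conv", "3x3_sep_conv", "max_pool"]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# The 'parent' controller keeps one logit per candidate; its softmax gives the
# probability of proposing each architecture.
controller_logits = [0.0] * len(CANDIDATE_LAYERS)

def propose_architecture(logits):
    probs = softmax(logits)
    choice = random.choices(range(len(CANDIDATE_LAYERS)), weights=probs)[0]
    return choice, probs[choice]

def train_child_and_measure_accuracy(choice):
    # Stand-in for actually building and training a 'child' network on data:
    # a fixed per-choice quality plus noise plays the role of validation accuracy.
    true_quality = {"3x3_conv": 0.70, "5x5_conv": 0.74,
                    "3x3_sep_conv": 0.82, "max_pool": 0.65}
    return true_quality[CANDIDATE_LAYERS[choice]] + random.uniform(-0.02, 0.02)

choice, prob = propose_architecture(controller_logits)
accuracy = train_child_and_measure_accuracy(choice)
print(f"Proposed {CANDIDATE_LAYERS[choice]} (p={prob:.2f}); child accuracy {accuracy:.3f}")
```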

A feedback loop scales the gradient of the controller’s proposal probability by the child network’s accuracy and feeds the result back to the controller. The two networks repeat this exchange of ever-better models thousands of times until the optimal accuracy is reached – not unlike father-and-son team David and Orion Hindawi, who founded cybersecurity firm Tanium.
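
Below is a minimal sketch of that feedback loop, assuming it amounts to a standard REINFORCE-style policy-gradient update: the gradient of the log-probability of the sampled architecture is scaled by the child’s accuracy and applied back to the controller’s parameters. It reuses the toy controller from the earlier sketch, and the reward values remain illustrative rather than drawn from Google’s implementation.

```python
import math
import random

CANDIDATE_LAYERS = ["3x3_conv", "5x5_conv", "3x3_sep_conv", "max_pool"]
TRUE_QUALITY = {"3x3_conv": 0.70, "5x5_conv": 0.74, "3x3_sep_conv": 0.82, "max_pool": 0.65}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.0] * len(CANDIDATE_LAYERS)   # controller parameters
learning_rate = 0.5

for step in range(2000):                 # thousands of propose/train/feedback cycles
    probs = softmax(logits)
    choice = random.choices(range(len(CANDIDATE_LAYERS)), weights=probs)[0]
    accuracy = TRUE_QUALITY[CANDIDATE_LAYERS[choice]] + random.uniform(-0.02, 0.02)

    # For a softmax policy, d(log p[choice]) / d(logit[i]) = 1{i == choice} - probs[i].
    # Scale that gradient by the child's accuracy (the reward) and feed it back
    # into the controller, so well-performing proposals become more likely.
    for i in range(len(logits)):
        grad_log_prob = (1.0 if i == choice else 0.0) - probs[i]
        logits[i] += learning_rate * accuracy * grad_log_prob

best = max(range(len(logits)), key=lambda i: logits[i])
print("Controller now prefers:", CANDIDATE_LAYERS[best])
```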

In Learning Transferable Architectures for Scalable Image Recognition, Google applied AutoML to the ImageNet image classification and COCO object detection datasets. Given the gigantic size of these computer vision datasets (orders of magnitude larger than CIFAR-10 and Penn Treebank), engineers redesigned the search space so that AutoML could find the best layer and then stack copies of it to create the final network. The search turned up layers that performed well not only on CIFAR-10 but also on ImageNet classification and COCO object detection; two such layers were combined to produce the new architecture.
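
The ‘find one good layer, then stack it’ idea can be illustrated with a short Keras sketch, assuming TensorFlow 2.x is installed. The normal and reduction cells below are hand-written stand-ins rather than the cells actually discovered in the paper; the point is only how a single searched layer is repeated, with resolution-reducing layers in between, to scale up to an ImageNet-sized network.

```python
import tensorflow as tf

def normal_cell(x, filters):
    # Stand-in 'normal' cell: keeps spatial resolution, refines features.
    y = tf.keras.layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    y = tf.keras.layers.BatchNormalization()(y)
    return tf.keras.layers.add([x, y]) if x.shape[-1] == filters else y

def reduction_cell(x, filters):
    # Stand-in 'reduction' cell: halves spatial resolution and widens the network.
    y = tf.keras.layers.SeparableConv2D(filters, 3, strides=2, padding="same", activation="relu")(x)
    return tf.keras.layers.BatchNormalization()(y)

def build_stacked_network(input_shape=(224, 224, 3), num_classes=1000,
                          cells_per_block=4, filters=64):
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(filters, 3, strides=2, padding="same")(inputs)
    # Stack the same searched cell many times; a layer found on a small dataset
    # such as CIFAR-10 can be scaled up this way to an ImageNet-sized model.
    for _ in range(3):
        for _ in range(cells_per_block):
            x = normal_cell(x, filters)
        filters *= 2
        x = reduction_cell(x, filters)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_stacked_network()
model.summary()
```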