The rhetoric surrounding AI and robots has some believing that we are nearing the ability to introduce something like Joi, the AI hologram from Blade Runner 2049. While that kind of advancement in fact remains in the realm of fiction, the AI Index Annual Report 2017 shows that AI is fighting to level the playing field in the battle of humans versus machines.
With Artificial Intelligence technologies being developed across a wide range of applications, the AI Index revealed several surprising insights into where humans stand in the race between robots and biological brains. While robots easily outperform regular employees in certain visual tasks, natural language processing has not yet surpassed human capability.
Scientists reached a major breakthrough this year, with tests revealing that the best Artificial Intelligence system recognised speech from phone call audio with 95% accuracy – neck-and-neck with human ability. AI has made significant progress on the Switchboard HUB5’00 metric, improving steadily from 84% in 2011.
Progress in machine natural language understanding (NLU) is nearing perfection when it comes to parsing sentences of all lengths. In 2012, AI could determine the syntactic structure of sentences of fewer than 40 words with 93% accuracy. By 2017, parsing performance on sentences of any length was approaching 95%.
Professionals working with text need not worry about losing their jobs to a machine just yet, as AI cannot yet find the answer to a question within a document as well as a person can. AI accuracy on this task shot up from 60% in August 2015 to 72% by summer 2016. Humans remain just ahead at 82% – a figure unchanged between 2015 and 2017 – though the gap narrowed as machine capability approached 80% in November 2017.
AI has outperformed humans in object detection on the Large Scale Visual Recognition Challenge (LSVRC) since 2015, with the technology nearing 98% accuracy by mid-2016 – roughly 3% better than human performance. Researchers found that error rates for image labelling have fallen from 28.5% in 2010 to below 2.5%. When it comes to visual question answering, however, AI progress more or less flat-lined between mid-2016 and June 2017: the best AI system is around 67% accurate, compared with 83% for a human candidate.
Despite the remarkable advances made in Artificial Intelligence programming this year, the machines in question would likely perform much worse on a task modified even slightly. Human brains, by contrast, remain far superior because of their ability to cross-reference knowledge sets and make creative links. A programme that can read Chinese characters would fail to draw any cultural inferences from the text it parses, whereas the average human mind would make such inferences almost inevitably.
A persistent downfall of machine learning is an inability to generalise – unlike the fictional Joi in Blade Runner. While games such as chess and Go take place in controlled experimental environments, an AI is hardly able to read its opponent’s body language to infer anything at all, unlike the most successful poker players. That said, the Libratus AI built by two Carnegie Mellon scientists defeated four of the best no-limit Texas Hold ‘Em players in January 2017. So it depends what the goal is.
The Index project includes experts from SRI International, Stanford University and the Massachusetts Institute of Technology. The researchers note that their analysis is limited in that its data sources are mainly from the US, and that the report does not cover R&D investment by governments and corporations. These subjects are “deeply important”, the researchers said, and they intend to broach them in future reports.