As excitement builds around AI, a discovery has cast its readiness for deployment into question: researchers have found image recognition driven by the technology to be deeply flawed.
Testing has revealed that systems built on AI image recognition can be tricked by altering just a single pixel. This casts a long shadow of doubt over AI in cybersecurity, a space in which the technology is being eagerly pursued.
The research was conducted and presented by Su Jiawei and a team of colleagues from Kyushu University. They found that a single pixel change can convince a computer that a taxi is a dog, or a turtle a rifle, as reported by the BBC.
The failure stems from the neural networks behind the technology being confused by tiny, targeted perturbations: the researchers found that changing a single pixel was enough to disrupt classification in around 74 per cent of the images tested.
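The idea can be illustrated with a minimal sketch. The researchers attacked deep neural networks using differential evolution; the toy below instead uses a linear model and plain random search, purely to show the shape of a one-pixel attack (the model, image size, and search loop here are all illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a linear model over a
# flattened 8x8 grayscale image with two output classes. The real
# study attacked deep neural networks, not linear models.
W = rng.normal(size=(2, 64))

def scores(img):
    """Class scores for an 8x8 image."""
    return W @ img.flatten()

def one_pixel_attack(img, true_class, n_trials=500):
    """Random search over single-pixel edits for one that flips the
    prediction. The Kyushu researchers used differential evolution
    to pick the pixel and its new value; random search is a crude
    substitute that keeps the sketch short."""
    for _ in range(n_trials):
        x = rng.integers(0, 8)            # pixel row
        y = rng.integers(0, 8)            # pixel column
        v = rng.uniform(0.0, 1.0)         # new pixel value
        candidate = img.copy()
        candidate[x, y] = v
        if np.argmax(scores(candidate)) != true_class:
            return candidate, (x, y, v)
    return None                           # no flip found in budget

img = rng.uniform(0.0, 1.0, size=(8, 8))
label = int(np.argmax(scores(img)))
result = one_pixel_attack(img, label)
if result is not None:
    adv, (x, y, v) = result
    print(f"changing pixel ({x},{y}) to {v:.2f} flips the prediction")
```

The point of the sketch is that the attacker never needs access to the model's internals, only to its outputs: it simply probes candidate images until the classification changes, which is what makes such attacks practical against deployed systems.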
Mr Jiawei, speaking to the BBC, said: “As far as we know, there is no data-set or network that is much more robust than others… More and more real-world systems are starting to incorporate neural networks, and it’s a big concern that these systems may be possible to subvert or attack using adversarial examples.”
Mr Jiawei's concern is well founded: industry leaders such as Salesforce and Google have both recently made major moves to expand their neural network capabilities. Innovation has previously outpaced security testing and awareness, with a potential industrial IoT crisis looming as vendors flood the market with insecure connected devices.
The discovery made during the research raises the question of whether the world is charging headlong toward another security crisis, this time caused by AI and neural network weaknesses.
With AI development moving at full speed, organisations are investing heavily and are primed to bring the technology on board; many are already deeply immersed in its introduction. Tech industry giants such as IBM and Microsoft are among the organisations leading the way, adding to inflated expectations of the technology's security.