Artificial intelligence could unlock the Internet of Things (IoT) potential every CIO has dreamed of. Yet AI requires substantial compute power, so how close can AI get to the edge?
IoT technologies are frequently criticised as nascent, insecure, pointless or clouded by uncertainty around ROI. On the other hand, well-advised integration of IoT devices within an industry-standard architecture can improve efficiency, bring costs down and reduce infrastructure TCO in the long term.
Internet of Things devices will inevitably sit “at the edge” of the central compute platform (e.g. the cloud) where data is stored, retrieved and processed. Yet AI could help distribute the workload so that edge devices can manage speedy computation at a distance from the data centre. This model has tremendous potential to improve efficiency, safety and time-to-decision for industries such as agriculture, space exploration, marine facilities, leisure and enterprise.
Moreover, hunger among business leaders for more complex and powerful IoT is growing. Particularly in sectors such as oil and gas extraction, predictive maintenance could save millions in revenue otherwise lost to a fault. Factor in limited global cloud capacity, and a pressing demand for compute capability at the edge becomes clear.
Unfortunately, a shortage of edge data storage and inadequate chips hamper enterprise innovation along this route. While quantum computing could one day solve processing woes, the current cost of such machines is astronomical, and much work remains to bring that solution to market.
How can businesses adapt? For years now, smartphone manufacturers have led the way in ramping up compute power in edge devices. The automobile industry soon caught on, with General Motors, Daimler AG, Volkswagen and Hyundai partnering with technology companies such as Waymo (Alphabet), Cisco, BlackBerry and Toshiba.
Alan Mindlin, Technical Manager at Morey, advises companies to consider exactly which kinds of applications they wish to run “at the edge” in order to estimate the capacity needed; otherwise, IT decision makers risk a budget bonfire. “Trade-offs relate to the number of nodes, the amount of data each node needs to share, the interval that it must share it within, and so on,” Mindlin told CBR.
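To make those trade-offs concrete, the sketch below runs the kind of back-of-envelope estimate Mindlin suggests, converting node count, payload size and reporting interval into sustained bandwidth and edge storage needs. Every figure in it is an illustrative assumption, not a vendor specification.

```python
# Back-of-envelope estimate of edge capacity, following Mindlin's
# trade-offs: number of nodes, data shared per node, and the
# interval it must be shared within. All figures are assumptions.

NODES = 500                # sensors reporting to one edge gateway
PAYLOAD_BYTES = 2_048      # data each node shares per reading
INTERVAL_S = 10            # reporting interval in seconds
RETENTION_DAYS = 30        # how long readings are kept at the edge

readings_per_day = NODES * (86_400 / INTERVAL_S)
throughput_bps = NODES * PAYLOAD_BYTES * 8 / INTERVAL_S
storage_gb = readings_per_day * PAYLOAD_BYTES * RETENTION_DAYS / 1e9

print(f"Sustained throughput: {throughput_bps / 1e6:.2f} Mbit/s")
print(f"Edge storage over {RETENTION_DAYS} days: {storage_gb:.1f} GB")
```

Doubling the node count or halving the interval doubles both figures, which is exactly the budget trade-off Mindlin warns about.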
Tom Fisher, CTO of MapR, said edge computing works best in practice when it “is augmenting cloud computing”. However, Fisher warns, “you need to run a data platform because the continued explosion and distribution of data can make knowing where data is and what it represents impossible.”
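A minimal sketch of the bookkeeping Fisher describes is shown below: a small catalogue recording where each dataset lives and what its fields represent, so data scattered across edge nodes and the cloud stays discoverable. The record structure and field names are assumptions made up for illustration, not MapR’s platform.

```python
# Minimal data catalogue sketch: track where each dataset lives and
# what it represents, across edge nodes and cloud storage.
# Structure and fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str       # logical dataset name
    location: str   # physical home: edge node path or cloud bucket
    schema: dict    # what each field in the data represents

class DataCatalog:
    def __init__(self):
        self._records = {}

    def register(self, record: DatasetRecord) -> None:
        self._records[record.name] = record

    def locate(self, name: str) -> DatasetRecord:
        return self._records[name]

catalog = DataCatalog()
catalog.register(DatasetRecord(
    name="pump-vibration",
    location="edge-gateway-07:/data/vibration.parquet",
    schema={"ts": "ISO-8601 timestamp", "rms_g": "vibration RMS in g"}))
print(catalog.locate("pump-vibration").location)
```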
Another crucial consideration is that multiple points of network entry multiply data security weaknesses, though cyber protection companies have AI plans of their own.
On an enterprise level, ARM, Intel and NVIDIA are just three big firms working on more robust artificial intelligence solutions for edge computing. HPE and Microsoft are also breaking ground in enterprise IoT offerings.
“Edge analytics and IoT are enabling technologies that bring the power of IoT to a growing number of industries, including areas where there is low connectivity or where power is an issue,” said David Schatsky, a managing director at Deloitte, on an HPE blog.
HPE is also developing a neural network accelerator chip, according to a Register report in late November. The multinational’s Dot Product Engine (DPE) is reportedly capable of carrying out high-speed AI computations. Thus far, HPE’s best edge offering consists of the Edgeline Converged Edge System (up to 64 Xeon cores) and the GL20 IoT Gateway for “data aggregation and higher compute capabilities”.
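The emphasis on dot products is no accident: a fully connected neural network layer amounts to one dot product per output neuron, which is the operation such a chip would accelerate. The plain-numpy sketch below is purely illustrative and makes no claims about HPE’s actual DPE design.

```python
# Why a "dot product engine" suits AI workloads: a fully connected
# neural network layer is one dot product per output neuron,
# i.e. a single matrix-vector multiply. Illustration only.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 784))  # 256 neurons, 784 inputs
bias = rng.standard_normal(256)
x = rng.standard_normal(784)               # one flattened input frame

activations = np.maximum(weights @ x + bias, 0.0)  # ReLU(Wx + b)
print(activations.shape)                   # (256,) -> next layer
```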
At Microsoft Connect in November, the company showcased a preview of its Azure IoT Edge computing package, containing infrastructure and modules for building IoT gateway programmes. Remarkably, the software giant said it has created a range of AI-based services to help companies extract useful analytics at the edge, as well as assisting with the integration of machine learning into corporate software.
“Where connectivity can be expensive or unreliable, having IoT devices that can do local processing outside of the cloud is a big advantage under these conditions,” Frank Shaw, Microsoft’s VP of Corporate Communications, told TechRepublic, citing rapid anomaly detection by edge IoT nodes.
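As a minimal sketch of the local anomaly detection Shaw cites, an edge node might flag outlier sensor readings with a rolling z-score before anything travels to the cloud. The windowing scheme and thresholds below are illustrative assumptions, not Microsoft’s implementation.

```python
# Rolling z-score anomaly detection on an edge node: flag readings
# far from the recent mean without a cloud round trip.
# Window size and threshold are illustrative assumptions.

from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    def __init__(self, window=60, threshold=3.0):
        self.readings = deque(maxlen=window)  # recent readings only
        self.threshold = threshold            # z-score cut-off

    def is_anomaly(self, value):
        flagged = False
        if len(self.readings) >= 10:          # wait for a baseline
            mu, sigma = mean(self.readings), stdev(self.readings)
            flagged = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.readings.append(value)
        return flagged

detector = EdgeAnomalyDetector()
for reading in [20.1, 20.3, 19.9] * 10 + [35.0]:
    if detector.is_anomaly(reading):
        print(f"Local alert, anomalous reading: {reading}")
```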
On the component side, Intel launched its Movidius Myriad X “vision processing unit” (VPU) in August, containing a Neural Compute Engine which the firm says can deliver “low-power, high-performance” AI processing at the edge. According to Intel, the Myriad X is ten times faster than its predecessor, the Myriad 2. Myriad X’s deep learning inference capability could prove one of the core components of visual processing for autonomous enterprise IoT devices of the future, including drones and IoT cameras.
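For developers, offloading inference to a Movidius device looked roughly like the sketch below. It follows the Python API of Intel’s Neural Compute SDK v1, which shipped for the earlier, Myriad 2-based Neural Compute Stick; the Myriad X toolchain may differ, and the “graph” file is assumed to be a network pre-compiled with the SDK’s offline tools.

```python
# Sketch of VPU-offloaded inference using the Python API of Intel's
# Neural Compute SDK v1 (shipped for the Myriad 2-based Neural
# Compute Stick; the Myriad X toolchain may differ). The 'graph'
# blob is assumed pre-compiled with the SDK's offline tools.

import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()          # find attached VPUs
if not devices:
    raise RuntimeError("No Movidius device found")
device = mvnc.Device(devices[0])
device.OpenDevice()

with open("graph", "rb") as f:             # pre-compiled network blob
    graph = device.AllocateGraph(f.read())

frame = np.random.rand(224, 224, 3).astype(np.float16)  # stand-in image
graph.LoadTensor(frame, "user object")     # run inference on the VPU
output, _ = graph.GetResult()              # e.g. class probabilities
print("Top class:", int(np.argmax(output)))

graph.DeallocateGraph()
device.CloseDevice()
```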
However, Intel’s reputation for cybersecurity has taken a sledgehammering this winter over Spectre and Meltdown vulnerabilities. These incidents are unlikely to bolster already-flagging confidence in Internet of Things platform security.
Intel unveiled its neuromorphic AI chip, Loihi, at the end of September, with an update on testing following at CES 2018. The firm has invested more than $1bn in AI firms, CEO Brian Krzanich told ZDNet. But competitors are not far behind: the AI-optimised Jetson TX2 from leading GPU manufacturer NVIDIA has proven popular since its release in March.
Current models of computing are reaching their limits, and a fundamental change in CPU/GPU technology must take place before enterprises can fully benefit from edge computing. Moore’s Law improvements will soon cease to cut the mustard, and alternatives to current silicon configurations are needed. In the meantime, Google’s TPU chips and the promise of new neuromorphic chips could see business edge computing take off within the next five years.