Huawei has launched a new data platform that aims to handle the explosion of data from video and IoT devices, while removing the technical complexity of data infrastructure. The Chinese firm also teased an open source data virtualisation engine, OpenHetu, that’s due to be released in 2020.

Data virtualisation is an approach to data management that lets an application retrieve and manipulate data, while abstracting away the technical details of how that data is stored. OpenHetu will come with an open-source kernel, Huawei said, meaning developers can add “data source extensions and SQL execution policies, to allow fast interoperability and development.”
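The idea of hiding storage details behind a single query interface can be sketched in a few lines. This is purely illustrative: the class and table names below are invented and bear no relation to OpenHetu's actual API, which Huawei has not yet published.

```python
# Illustrative only: a toy data-virtualisation layer. Callers ask for a
# table by name; which backend actually holds the rows stays hidden.

class VirtualCatalog:
    """Routes table lookups to registered backends behind one interface."""

    def __init__(self):
        self._sources = {}  # table name -> fetch function

    def register(self, table, fetch_fn):
        """Attach a backend: any callable returning rows for a table."""
        self._sources[table] = fetch_fn

    def query(self, table):
        """Retrieve rows without knowing how or where they are stored."""
        if table not in self._sources:
            raise KeyError(f"unknown table: {table}")
        return self._sources[table]()


catalog = VirtualCatalog()
# One "table" lives in memory; another could equally come from a file,
# an object store, or a remote database driver.
catalog.register("users", lambda: [{"id": 1, "name": "Ada"}])
catalog.register("events", lambda: [{"user_id": 1, "type": "login"}])

rows = catalog.query("users")
```

A real engine would add SQL parsing, pushdown of filters to each source, and pluggable connectors, which is roughly what Huawei's mention of "data source extensions and SQL execution policies" suggests.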

It did not initially provide further technical details.

The broader Huawei data stack, unveiled at the Global Intelligent Data Infrastructure Forum in Shenzhen, aims to cover the complete data lifecycle from collection to processing. By converging data across four levels, for example between storage systems and databases, or between big data platforms and databases, Huawei claims it can cut firms' total cost of operation by 30 percent.

Hou Jinlong, president of Huawei cloud & AI products commented: “New technologies including 5G, AI, and cloud are transforming the way we live and work, but are generating huge volumes of data, bringing enormous pressure on the existing data infrastructure and making it increasingly difficult to efficiently locate, fetch, and utilize data.”

AI and HetuEngine

AI processes are central to the Huawei data infrastructure offering, whose AI architecture is split into three layers: AI chipsets, storage, and cloud, which combine to support cloud-based training and on-premises inference. Huawei claims that its Ascend processors can improve cache pre-fetching hit rates by ‘automatically learning and identifying I/O flows’, helping firms cut operating costs by 25 percent.

Using automated AI systems, Huawei says it can predict disk faults 14 days in advance and spot performance bottlenecks 60 days in advance. Of course, the system would need to be trained on a company's operational data before it could predict bottlenecks or faults with any certainty.
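Fault prediction of this kind is typically framed as classification over drive telemetry (SMART-style counters) collected daily. The sketch below illustrates the shape of the problem only; the feature names, weights, and threshold are invented, and Huawei has published no details of its model.

```python
# Hypothetical sketch: disk-fault prediction over daily SMART-style
# telemetry. All attribute names and numeric values are invented.

def risk_score(sample):
    """Combine a few telemetry counters into a crude failure risk score."""
    return (
        2.0 * sample["reallocated_sectors"]
        + 1.5 * sample["pending_sectors"]
        + 0.5 * sample["read_error_rate"]
    )

def predict_failure(history, threshold=10.0):
    """Flag a drive if its average recent risk score exceeds a threshold.

    `history` is a list of daily telemetry dicts, oldest first; the last
    14 samples stand in for the two-week horizon mentioned above."""
    recent = history[-14:]
    avg = sum(risk_score(s) for s in recent) / len(recent)
    return avg > threshold

# Two synthetic drives: one with clean counters, one degrading.
healthy = [{"reallocated_sectors": 0, "pending_sectors": 0,
            "read_error_rate": 1}] * 14
failing = [{"reallocated_sectors": 8, "pending_sectors": 4,
            "read_error_rate": 2}] * 14
```

A production system would replace the hand-set weights with a trained model and would need labelled failure histories from the operator's own fleet, which is why such predictions only become reliable after training on a company's data.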
