At this week’s annual Open Compute Project (OCP) Summit, a host of data centre announcements are being made by big companies such as Microsoft and Facebook.

However, this year Microsoft seems to have taken the lead, announcing a string of partnerships with different companies to boost its open source innovations in the data centre.

The Open Compute Project is a foundation of companies that together share designs for data centre products in order to drive innovation.

The mission of the organisation is to design and boost the delivery of the most efficient server, storage and data centre hardware designs for scalable computing.

CBR lists five of the biggest announcements made at this year’s show by Microsoft.

Microsoft & AMD

To kick-start its long list of partnerships, Microsoft announced a collaboration with AMD dedicated to delivering open source cloud hardware that combines AMD’s “Naples” processor with Microsoft’s Project Olympus.

Project Olympus, launched in October 2016, is an open source server hardware design.

Read more: Microsoft and AMD partner to deliver open source cloud hardware

It was the company’s collaboration with the Open Compute Project that brought about the release of this hardware development model.

Given the influence the new server design has had on the rest of the data centre industry, a partnership with AMD was a logical next step.

Combining Project Olympus with AMD’s “Naples” processor is designed to enable the cloud hardware to adapt to meet the application demands of global data centre customers.

This is particularly significant as it accelerates hardware innovation in the data centre. With the partnership taking place across an open source community, it also puts a readily deployable selection of open source designs on the market, while mounting a significant challenge to Intel’s dominance of the data centre industry.

Microsoft & Qualcomm

Microsoft is partnering with Qualcomm to bring ARM processors to data centre servers running the Windows operating system.

This is centred around Qualcomm’s 10 nanometre Centriq 2400 server platform, which will power Microsoft’s Azure cloud platform.

Aside from this, the companies also plan to deliver multiple generations of hardware, software and systems.

Microsoft has already begun running Windows Server on ARM, but currently only for internal use.

The specifications for the server are based on Microsoft’s Project Olympus, and the work has been helped along by the two companies’ previous collaboration on ARM-based server enablement.

Following this announcement, Qualcomm also confirmed that it has now formally joined the Open Compute Project Foundation.

The announcement is particularly significant as the ARM processors are said to be committed specifically to the project.

Microsoft & Schneider Electric

Microsoft has also turned its attention to rack power distribution through a joint partnership with Schneider Electric. The two will work towards the design of a Universal Rack Power Distribution Unit (UPDU).

The UPDU will be designed to simplify rack power system procurement, inventory management and deployment processes for data centre operators managing large, hyperscale and colocation data centres.

The partnership with Microsoft is said to have enabled Schneider Electric to deliver an innovative solution that addresses the real issues seen across large-scale global deployments.

This follows the plan the company set out last year to make more informed decisions around Open Compute data centres. By choosing Microsoft as a partner to deliver the solution, Schneider Electric is not only innovating in data centre technology but also moving the data centre industry forward as a whole.

The new UPDU’s flexible input connector enables data centre operators to use a single power distribution unit across their rack system architecture, eliminating the need to employ an array of remote power distribution units. This is significant as it removes the need to stock a wide selection of power units and supports lower power use in the data centre.

Microsoft & Cavium

Turning to cloud services, Microsoft confirmed it will be collaborating with semiconductor provider Cavium to accelerate a variety of cloud workloads on Cavium’s ThunderX2 ARMv8-A data centre processor for the Microsoft Azure cloud platform.

The two companies will also be working together to deliver web services on a version of Windows Server developed for Microsoft’s internal use, running cloud services workloads on ThunderX2.

Again, the server platform is based on Microsoft’s Project Olympus, which has been tested and is fully compliant with Cavium’s hardware platform.

This is significant as ARM-based servers have come a long way since they were first deployed across first-generation data centres.

Cavium said that it expects the second-generation products to help accelerate ARM server deployment across a mainstream set of volume applications, which, with Microsoft’s help, will boost commercial deployments for data centres and cloud providers.

Microsoft & Nvidia

Finally, Microsoft moved to accelerate cloud innovation with artificial intelligence, partnering with Nvidia to boost AI cloud computing.

The two companies have come together to provide hyperscale data centres with a fast and flexible path for AI deployment through the new HGX-1 hyperscale GPU accelerator, an open source design developed in conjunction with Microsoft’s Project Olympus.

The new design is intended to meet the accelerating demand for AI computing in the cloud across a range of fields; for the many enterprises and start-ups currently investing in AI and adopting AI-based approaches, the HGX-1 architecture provides that performance in the cloud.

Providing up to 100x faster deep learning performance, it offers significant flexibility to work with data centres around the world.

For hyperscale data centres in particular, the announcement is significant as it aims to offer a quick and simple path to being ready for AI innovation.