
PayPerCloud upgrades its network with Brocade Ethernet fabric

Cloud hosting provider PayPerCloud has replaced its network switching infrastructure with a Brocade Ethernet fabric to deliver enhanced performance and scalability, along with simpler design, installation, service and support.

The Brocade Ethernet fabric includes Brocade VDX 6720 switches and MLX Series routers.

The Brocade VDX 6720 switches will deliver 10 GbE top-of-rack server connectivity, while the Brocade MLX-16 routers in a Multi-Chassis Trunking (MCT) configuration will deliver network-level virtualisation and enhanced network reliability.

The Brocade network infrastructure allows PayPerCloud to support new customers on its bandwidth-intensive backup service as the company continues to grow.


It also allows PayPerCloud to increase its server-rack density from eight servers per rack to 12, and the company expects to reduce power consumption by approximately 60%.

PayPerCloud has automated a previously manual process by using Automatic Migration of Port Profiles (AMPP) with Brocade Network Advisor, a graphical user interface (GUI)-based network management tool.

The Brocade AMPP feature ensures that a consistent port profile is applied to all VDX switches in the fabric, so a virtual machine's network settings follow it when it migrates between hosts, while Brocade Network Advisor gives PayPerCloud end-to-end visibility through an integrated interface that manages the Brocade devices.
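As an illustration of how AMPP works, a port profile on a Brocade VDX switch bundles the VLAN (and optionally QoS) settings for a class of virtual machines, is activated fabric-wide, and is then bound to a VM's MAC address so the profile follows the VM on migration. The sketch below assumes the Brocade Network OS CLI; the profile name, VLAN ID and MAC address are illustrative, not taken from PayPerCloud's deployment:

```text
! Define a port profile carrying the VM's network settings
switch(config)# port-profile vm-web-profile
switch(conf-pp)# vlan-profile
switch(conf-pp-vlan)# switchport
switch(conf-pp-vlan)# switchport mode access
switch(conf-pp-vlan)# switchport access vlan 200

! Activate the profile across the VCS fabric
switch(config)# port-profile vm-web-profile activate

! Bind the VM's MAC address so its settings follow it between hosts
switch(config)# port-profile vm-web-profile static 0050.56bf.0001
```

Because the fabric distributes the activated profile to every VDX switch, the manual step of reconfiguring the destination switch port after each VM migration goes away, which is the process Brocade Network Advisor lets PayPerCloud manage from a single interface.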

In the PayPerCloud environment, Brocade 1860 Fabric Adapter cards installed in Dell servers enable 10 GbE connectivity to the Microsoft Hyper-V hosts and security systems.

PayPerCloud president Miles Feinberg said the new network upgrade cut latency to the internet backbone from eight to nine milliseconds down to sub-millisecond levels.

"Our rack-to-rack latency is in the nanoseconds, whereas before it was three to four milliseconds," said Feinberg.

"Today, the Brocade architecture allows us to support 72 servers per row and 540 possible VMs in a rack. If we need more 10 GbE capacity, we can scale out easily."

This article is from the CBROnline archive: some formatting and images may not be present.

CBR Staff Writer

CBR Online legacy content.