Some people equate today’s cloud environment with yesteryear’s mainframe era.

The argument goes like this: if I'm just buying time in someone's cloud, and that cloud is hosted from one or several mega data centres hundreds or thousands of miles away, then it is not so different from buying time on the mainframe inside a secure IT room, something I did 30 or 40 years ago. I'm simply buying time and service across a network built to distribute data from a centre to the edge.

But there are key differences in the requirements of today's networked workloads and the coming IoT-based data tsunami.

Greater volumes and more than one direction
The first is network volume. The networks in use today were not built to cope with the volumes and variety of data being pushed from the centre. Think of video or graphics-heavy files such as those produced by data visualisation tools. With this type of workload the priority becomes the user experience. The rise of content delivery networks provided platforms to solve content latency issues and cut jitter; firms such as Akamai emerged to overcome the challenge of streaming HD (and soon 4K) video and other fat files to a range of devices across fixed and mobile networks. But increasingly firms also require local data processing and storage to be as close to the user as possible.

The vast data volumes being pushed around the network are one strain, but what's coming has the potential to cause even more performance issues.

The issue to overcome is direction and the continued reliance on networks that were not built for today’s multi-directional communications.

The internet of things, the industrial internet of things, machine-to-machine communications, sensor-based smart technology: each of these sits under the IoT umbrella.

In a future built on the IoT it is the endpoints which will not only consume data but become data generators. Think of a smart device in a home or office environment, think of a sensor in a car or building and the data it will generate. Think of machine-to-machine data.

Once one considers these new data sources, one must consider the data platform architecture that will support them.

The notion that all these additional exabytes of data will be sent across a network to mega-scale, multi-megawatt data centres for processing and storage is erroneous.

Data architectures will have to change, but data infrastructure will change first. It is by putting micro, mini and modular data centres out in the branch environment, the regional office or the mobile network base station that this new world of high-volume, multi-directional data transfer will be served.

So who is taking this approach?
Arun Shenoy leads the UK and Ireland IT business at Schneider Electric (SE). The IT business is responsible for a large range of infrastructure products such as UPSs (uninterruptible power supplies), racks, in-row cooling and PDUs (power distribution units), and software such as DCIM (Data Centre Infrastructure Management).

The IT business also acts globally as the lead business for Schneider's data centre business and can take traditional IT products and put them into non-IT applications.

In the data centre space SE is targeting all the firms that offer data centre infrastructure or data centres as some form of service. That is every colocation provider, the cloud providers and the service providers such as Rackspace. And it includes organisations in financial services, government and manufacturing that own and operate their own infrastructure for their own purposes.

What these players have in common is an increasing need for edge computing. The days of having everything centralised in a wholly owned, on-premises environment are long gone.

Shenoy says compliance and regulation force organisations to behave in different ways depending on their risk profile and what they are compelled to do by law. Financial services firms are long-standing occupiers of colo, and the trend is to keep core IT in their own infrastructure while moving general apps into cloud or colo, he says. One driver for financial services is competition, as new entrants without legacy infrastructure come into the market. Another is cost, as traditional firms often operate from major metropolitan areas where real estate is expensive.

This is driving a push to the edge, both for owned and operated sites and through third parties. It is a prime market for Schneider's factory-made micro data centres.

In addition to financial services, another sector undergoing huge transformational change is telecommunications. It too is seeing new competition in the market, and traditional incumbents know they must expand the breadth of their services or risk being relegated to pipe operators. But they must address the infrastructure needed to provide these new services.

"How much telco grade switching is there for branch side or regional offices? The telcos know they must somehow address the cloud requirements of their customers so that they not become simply providers of the pipes for everyone else’s services. They have huge infrastructure investments that need to be modernised over time. The obvious transformation would be to move from a traditional Telco environment to more of a software defined IT based environment. From this they can deliver telco services and cloud services and rich media services and content distribution much more effectively."

Shenoy offers the example of BT, which is moving beyond being a network infrastructure provider. It wants to be a cloud provider and a managed content provider.

"The telco sector is a good description of a market as a whole which is going in 2 or 3 directions at the same time."

Data centres at the edge
There was a school of thought three years ago that the data centre industry would consolidate to a few mega-scale facilities. That has so far been proved correct for one part of the market: for firms such as Google, Microsoft or Amazon it has been a very viable model to build data centres hundreds of thousands of square feet in size operating at tens of megawatts of power.

But Shenoy says that the disruption for all these different players, from financial services firms facing new competitors to telcos changing their businesses to cloud providers, is coming in the shape of IoT.

"Something that changes as the world develops towards IOT is how to take the 8.5 billion internet subscribers sitting today on an architecture that is largely big data centres and a many tiered network out to the edge. Today’s architecture is fine if we assume that those users are consumers of content, so it is essentially a one way traffic system."

But one of the IoT challenges is going from 8 billion to 50 billion devices. Most new devices will produce content, at which point the argument that a small collection of hyper-scale data centres is the right way to meet the need becomes much less compelling.

"We will see very large infrastructure investments continue and as the telcos modernise, we will see the emergence of edge infrastructure. The telcos are going to be the next big entrants to put computing at the edge," says Shenoy.

Execution
Schneider describes its micro data centres as factory-made, self-contained power, space and cooling enclosures which can be fully fitted out through partnerships with firms such as Cisco, NetApp, VMware and IBM.

There are two market forces that Shenoy sees as key to the adoption of edge data centres.

Among end users, what CIOs want is predictability: the same insight, resilience, security and performance at the local data centre level that they get in a central data centre.

For mass adoption by telcos there will be a requirement for standardisation. At scale the edge data centre will be a commodity product and subject to those market forces.

Shenoy says it can be built to individual needs and then replicated at scale, and the advances in IT-type environments mean any shift for the telcos represents no greater risk.

"The way telcos deliver carrier grade is not in the infrastructure but in the management of the infrastructure and with micro data centres the management of the system and therefore the resilience is integral," he says.

Modular to Micro
Schneider Electric has been making modularised data centres for many years. A modular data centre is a pre-manufactured, factory-assembled facility with almost all of the attributes of a traditionally built data centre. It is built bespoke to match specific customer requirements. It can be housed in a standard shipping container or a customised module, depending on specification, restrictions or requirements. These data centres are deployed everywhere from oil fields to military theatres to traditional data centre car parks, and inside traditional data centres themselves. They house racks, come with different cooling options and different power density options, and their main advantages are quality of manufacture, speed of delivery and comparative cost.
They are placed close to the action and have a huge number of applications.

But these data centres represent the first step on a modular journey that takes us to the micro data centre.

We know there already exists a huge number of computers located in branch offices and office-building computer rooms. These have tended to be built up over time and are owned and operated by the IT department, with hands-on, on-site management, monitoring and maintenance required. They take up valuable space and they are inefficient.

But plug-and-play micro data centres, even ones that operate at 9kW, can be rolled into an office environment and start handling data with a minimum of set-up. They can be monitored remotely. Schneider Electric believes this is where much future computing will be done.
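
As an illustration of what that remote monitoring might look like in practice, here is a minimal polling sketch. It is not Schneider's DCIM software or any vendor's API: the site list, health endpoint, metric names and thresholds are all hypothetical and assumed purely for the example.

```python
# A minimal sketch of remote health polling for a fleet of micro data centres.
# Illustrative only: the site list, endpoint path, metric names and thresholds
# are hypothetical, not any vendor's (for example Schneider's DCIM) API.
import json
import urllib.request

SITES = {
    "branch-london": "http://10.0.1.10:8080/health",
    "branch-leeds": "http://10.0.2.10:8080/health",
    "basestation-04": "http://10.0.3.10:8080/health",
}

# Hypothetical alert thresholds for each reported metric.
THRESHOLDS = {
    "inlet_temp_c": 27.0,    # warn if rack inlet air exceeds this temperature
    "ups_load_pct": 80.0,    # warn if the UPS is heavily loaded
    "power_draw_kw": 9.0,    # warn if the enclosure nears its 9kW rating
}

def poll(name, url):
    """Fetch one site's health JSON and return any threshold breaches."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            metrics = json.load(resp)
    except OSError as err:
        return [f"{name}: unreachable ({err})"]
    return [
        f"{name}: {metric}={metrics[metric]} exceeds {limit}"
        for metric, limit in THRESHOLDS.items()
        if metrics.get(metric, 0) > limit
    ]

if __name__ == "__main__":
    for site, url in SITES.items():
        for alert in poll(site, url):
            print("ALERT:", alert)
```

The design point is the same one Shenoy makes about carrier grade: the resilience lives in the management of the system, with a handful of sites or several hundred watched from one place and no one needed on site.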