Say the words ‘data centre’ and many business people think of squat, usually rectangular buildings, either somewhere inaccessible (especially in the US) or conspicuously hidden in plain sight behind high fences topped with razor wire (think London docklands, or the industrial estates off the ring roads of every UK city). The image above is of the Microsoft data centre in Dublin, an impressive engineering feat because of its scale.

These dull-looking structures are designed, at first glance, to look like every other warehouse: loading docks, a reception area and lots of straight, clad walls.

A data centre is built to get power from the grid to the IT equipment efficiently (but not so efficiently as to put that power supply at risk, a balance measured as uptime) and then to remove the heat generated by all those processor heat sinks, network switches and spinning disks.

They are industrial buildings built to a specified level of physical resilience for power and cooling (N+N, N+2: think of buying and deploying two pieces of power and cooling equipment for every requirement).
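To make that redundancy shorthand concrete, here is a minimal sketch (my own illustration, not taken from this article) of how many pieces of power or cooling equipment each common scheme implies, where N is the number of units the IT load actually needs; the helper function name is hypothetical.

def installed_units(n_required: int, scheme: str) -> int:
    """Units deployed for a given redundancy scheme."""
    if scheme == "N":            # no redundancy: exactly what the load needs
        return n_required
    if scheme == "N+1":          # one spare unit
        return n_required + 1
    if scheme == "N+2":          # two spare units
        return n_required + 2
    if scheme in ("2N", "N+N"):  # a full duplicate of every unit
        return 2 * n_required
    raise ValueError(f"unknown scheme: {scheme}")

# Example: a cooling load that needs 4 chillers
for scheme in ("N", "N+1", "N+2", "N+N"):
    print(scheme, installed_units(4, scheme))   # 4, 5, 6, 8

So N+N is the case where every requirement really is bought twice; N+2 only adds two spare units in total.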

Inside these buildings are metal racks on which the IT equipment sits, and around which you will find power switching gear, uninterruptible power supplies (UPSs), power distribution units (PDUs), chillers and air-conditioning equipment that moves air around, taking hot air away from the IT equipment and feeding in cool air (24°C to 27°C) to stop it overheating.

Until relatively recently the people who were really interested in data centre operations were mechanical or electrical engineers.
A broader group of people were interested in the money invested.

Traditionally data centres were commissioned by end users with workload requirements, property investors wishing to deploy capital, or commercial data centre players with portfolio needs. These were conceived by architects, overseen by design engineers and built by contractors, all of whom are highly professional, talented people who have a reputation for rarely seeming to agree with one another about what’s best.

Capital Investment

Again, until relatively recently it was a given, in capital deployment and investment terms, that the boards of directors of big end-user companies would sign off on big projects: for the last three decades IT has been vital to business, and therefore money was always made available to build data centres and stuff them with IT equipment.

That has all now changed, due to a number of factors. One is the availability of cloud services, delivered from purpose-built ‘web scale’ data centres by huge internet companies (see below). Another is the requirement for compute processing and storage to be done closer to the user and the consumer, so-called edge computing.

A data centre need no longer be a 25-year capital and construction project. These days enterprises, except those in the commercial data centre industry or cloud services sector, tend to be less interested in owning a depreciating asset designed and built to house IT equipment when no-one, and I mean no-one, knows what that equipment will look like in five years’ time, never mind 25.

Up to Date

So a few years ago manufacturers began looking at ways to industrialise data centres.

As we’ve said, a data centre is a shell containing equipment to power and cool IT equipment, so it should be relatively straightforward to build it in a factory and drop it where needed.

Thus containerized data centres were born, initially built using industry-standard 20ft or 40ft ISO shipping containers.

Then specialists realized that deployments often required bespoke designs to accommodate physical restrictions or hazardous conditions, think oil rigs or mining. This meant manufacturers built up reference designs showing how containers might need to be modified, or built them to order using pre-manufactured components.

And as IT equipment packed more punch into smaller form factors, thanks to Moore’s law, server virtualization and flash storage, so began the miniaturization of the data centre itself.

Mass Production

Ask yourself: which of the following is mass produced? A server? A switch? A storage box? The unit in which they are housed? The answer, of course, is that it depends on the context. A mainframe server is not a commodity, but an Intel x86 pizza-box server almost certainly is. The same applies to the storage box or the switch. And it certainly applies to a data centre. A server farm requiring 40 megawatts of power and several hundred thousand square feet of space is hardly a commodity, but a purpose-built box providing 10kW to IT equipment brings the commodity data centre somewhat closer.

As mentioned above, speak of traditional data centres and what comes to most people’s minds are the vast halls of white space, full of rows of racks either filled or waiting to be filled with servers, network and storage equipment.

These huge facilities will continue to be built and will act as centralized hubs for big companies such as Microsoft, Google and Amazon (and will increasingly process and store corporate data).

At the other end of the scale, a micro data centre is a pre-manufactured enclosure which can host all of your edge computing needs. All the components are pre-built in a clean factory environment and fully integrated to provide a complete solution.

Some micro data centre models can be rolled into office environments and, with not much more than a plug-and-play installation, can be working in minutes. Others require specialist installation, can scale, and can be deployed in secure ruggedized shells outside buildings.

Schneider Electric micro data centre

 

The components that make up the data centre, from the rack mounts, uninterruptible power supply (UPS) and power distribution unit (PDU) to the cooling equipment (free ventilation or forced DX cooling) and the sensors for temperature and humidity control, are specifically designed for micro data centre deployment.

In the past there often existed a mismatch between IT workload requirements and the engineered solutions built to accommodate them: the ‘build it and fill it’ approach. Industrialization has meant that data centres can now be built for the workload.

The data centre sector has moved from large, monolithic designs and builds, through phases in which modular components were deployed within big data centres and then the data centres themselves became modules, until finally we arrive at the micro data centre.
Today’s requirements mean that all sorts of data centres will be built for all sorts of workloads. One size does not fit all.

White Paper: This white paper offers a ‘Quantitative Analysis of a Prefabricated vs Traditional Data Center’.

Prefabricated data centres are growing in popularity, especially for smaller Edge of Network installations, thanks to the proven benefits of speed of deployment and scalability. The provision of ever more reliable reference designs and customisable, pre-assembled and factory-tested modular building blocks further enhances their appeal.

The ability to match the size of a data centre to the immediate load required, with the option to expand or add capacity as necessary, is a strength of the prefabricated approach. However, the issue of capital expenditure (CAPEX) can be confusing, with opinions divided as to whether prefab implies a more expensive initial investment than a traditional data centre design and build.

A new White Paper from Schneider Electric, a global specialist in energy management and automation, offers insight into the complexities of the issue by using a systematic comparison of the capital costs involved in building two data centres with identical capacity, levels of redundancy, power and cooling architectures, density and number of racks; one built using prefabricated modules and the other built by traditional methods. By isolating the capital costs from other variables, the paper allows a direct comparison between the two approaches to be made.

CAPEX costs include those associated with materials, design, site preparation, equipment installation and commissioning. The paper analyses two 440kW data centres, each based on a documented reference design. Both installations used the same major components, such as UPS systems, chillers, racks and rack power-distribution units (PDUs).

In each case the space consisted of 44 IT racks, each capable of supporting an average of 10kW of IT load (44 racks × 10kW = 440kW). Hot aisle containment was used to optimise airflow in the space, whilst the exact arrangement of racks, coolers and PDUs varied between the two designs. For the prefabricated centre, two dual-bay IT modules were used, whereas the traditional centre comprised a single large IT room containing all 44 racks and the supporting infrastructure.

Third-party data centre modelling software from Romonet was used to perform the capital cost comparison. Assumptions were made, for a given location, for the cost of labour and for the difference in price between vacant land (for the prefab) and finished building space for the traditional data centre.

Materials for prefabricated data centres cost more, as they are shipped with the physical structure pre-assembled and the price includes the physical housing or containers as well as the factory integration work. The largest material premium for the prefabricated approach was for the IT room, followed by the cooling system.

On the other hand, onsite labour for prefab is cheaper as most of the integration work has already been completed in the factory. The cost of space, determined largely by the difference between vacant and developed real estate, represents a big saving for the prefabricated solution.

Sensitivity analysis reveals that there are two key variables affecting the capital expenditure, namely building cost and average power density/rack. At higher densities the savings achieved by using the prefabricated approach increase as more load can be housed in the same fixed module space. As density decreases, more modules are needed to house the same IT load and so the additional material overhead diminishes the savings.
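To illustrate that sensitivity, here is a toy model (all cost and module-size figures below are hypothetical, not taken from the white paper) showing how the number of prefab modules, and hence the material premium per kW, changes with rack density for a fixed 440kW IT load.

import math

IT_LOAD_KW = 440          # total IT load, as in the paper's example
RACKS_PER_MODULE = 22     # assumed capacity of one dual-bay IT module (hypothetical)
MODULE_PREMIUM = 150_000  # hypothetical extra material cost per prefab module

for density_kw_per_rack in (5, 10, 15, 20):
    racks = math.ceil(IT_LOAD_KW / density_kw_per_rack)
    modules = math.ceil(racks / RACKS_PER_MODULE)
    premium_per_kw = modules * MODULE_PREMIUM / IT_LOAD_KW
    print(f"{density_kw_per_rack:>2} kW/rack -> {racks:3d} racks, "
          f"{modules} modules, material premium per kW ~ {premium_per_kw:,.0f}")

# Lower density means more modules for the same load, so the prefab material
# premium per kW grows and erodes the labour and space savings; higher density
# shrinks it.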

For the example studied, capital costs were found to be 2 per cent cheaper overall for the prefabricated approach, with lower space and labour costs offsetting the greater material cost incurred.

The study concludes that there are many variables to be considered when assessing the capital costs of either approach and so each deployment should be considered on its particular merits.

Click below and choose your country to access the full white paper.

The full document is White Paper 218, entitled "Quantitative Analysis of a Prefabricated vs Traditional Data Center".