As more companies look to fresh air to cool their data centres, how efficient is this approach? Keysource has released a whitepaper on the topic, revealing that there is no ‘one size fits all’ solution.

There are two types of free cooling system: direct and indirect. A direct system filters fresh air straight into the colo building. An indirect system uses a plate heat exchanger to keep the internal air fully re-circulating while rejecting its heat to the outside.

The paper, based on a roundtable with industry experts, highlights that colder climates, such as the UK's, favour the direct approach, while warmer environments favour the indirect.

The main problem with direct fresh air is filtering contaminants out of the air, regardless of filter quality. Experts noted that for a site in the middle of a seaport, an industrial estate or another area of high contamination, filtration would be more costly in order to prevent equipment corrosion.

A direct fresh air approach was found to be more expensive from an initial CAPEX perspective, and a full mechanical backup is required.

With indirect fresh cooling, the free cooling and any mechanical systems can be integrated into a single system. This improves the chances of a totally chillerless data centre, and shifts the focus away from air pollutants and contaminants towards external ambient temperatures.

Both approaches allow server inlet temperatures to be raised because of the way the air is controlled, giving additional hours of free cooling.
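As a rough illustration of that point, the sketch below counts the hours in a year when outside air could satisfy a target server inlet temperature. The 27°C limit loosely follows the upper end of commonly cited inlet guidance, and the 3°C heat-exchanger "approach" and the temperature profile are illustrative assumptions, not figures from the whitepaper.

```python
import random

def free_cooling_hours(hourly_ambient_c, max_inlet_c=27.0, approach_c=3.0):
    """Count the hours in which ambient air, plus a heat-exchanger
    approach temperature (relevant for indirect systems), can still
    hold the target server inlet temperature."""
    return sum(1 for t in hourly_ambient_c if t + approach_c <= max_inlet_c)

# Illustrative year: 8760 hourly readings for a mild, UK-like climate.
random.seed(0)
temps = [random.gauss(11.0, 6.0) for _ in range(8760)]

hours = free_cooling_hours(temps)
print(f"Free-cooling hours: {hours} of 8760 ({hours / 87.60:.1f}%)")
```

Raising the allowable inlet temperature (the `max_inlet_c` parameter) directly increases the count, which is the mechanism the paper describes.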

Rob Elder, Director at Keysource, said: "With the continuing consumerisation of IT and the growth of cloud computing, demand for data centres continues to remain high. With so much demand for data, cooling remains high on the agenda and we see it as a priority to research new and innovative ways of cooling data centres."

Calculating the risks

When calculating risks, companies do not follow a standard risk-profiling process, as risk is hard to model. Risk studies are therefore often carried out for individual colo sites, but there is no standard approach to the matter.

The use of fresh cooling comes down to the application of it, and how an organisation calculates the risk of opting for this solution.

The paper highlighted that, according to the Uptime Institute, 75% of data centre downtime is due to human error. Keysource therefore advocates a simpler approach to normal and operational failure, built in at the design stage.

The paper explained that different organisations have different processes regarding risk and how they factor it into their decisions.

For example, large single application businesses with multiple facilities can keep their services running from a different data centre if one of their sites goes down. Organisations with high volume and critical applications or transactions are in a more delicate situation and cannot afford to lose any services due to the financial and operational implications.

Design considerations

The size of the data centre itself should not be an issue when choosing between direct and indirect fresh cooling systems. What matters is the filtration of the air and the way it is delivered to every part of the data centre.

A company that opts for direct fresh air will have to install a backup mechanical cooling system so it can respond promptly to any risks the data centre may face.

An indirect system, by contrast, does not require integration into the data centre's infrastructure fabric, reducing complexity and cost. These systems also demand less humidification than a direct one.

Direct fresh air systems require more attention to WUE (Water Usage Effectiveness), as they rely on water to provide capacity rather than simply to improve efficiency.
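For context, The Green Grid defines WUE as annual site water usage divided by annual IT equipment energy, giving a figure in litres per kWh. A minimal calculation, using illustrative numbers rather than figures from the whitepaper:

```python
def wue(annual_water_litres: float, annual_it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: litres of site water used per kWh
    of IT equipment energy (The Green Grid's definition)."""
    return annual_water_litres / annual_it_energy_kwh

# Hypothetical example: a 1 MW IT load running all year.
it_energy_kwh = 1_000 * 8_760        # 8,760,000 kWh
water_litres = 15_000_000            # illustrative evaporative usage

print(f"WUE = {wue(water_litres, it_energy_kwh):.2f} L/kWh")  # → WUE = 1.71 L/kWh
```

Tracking this metric alongside energy efficiency makes the water trade-off the paper describes explicit.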

Experts argue that many current data centres have deficient cooling, and that a direct fresh cooling approach is the easiest way to retrofit such facilities, with the older cooling system retained as a backup.

Indirect fresh cooling could lead to increased ROI and revenue, as it has the potential for zero refrigeration. This means colo organisations could adopt this route in half of the USA and in cities such as Madrid and Dubai. However, companies must be prepared to rely on large quantities of water and to accept higher server inlet temperatures during warmer periods.

The chillerless data centre

The paper also looked at adoption of the chillerless data centre, which is not being built at the same pace as conventional solutions and is often sited in climate-specific locations.

Keysource cites Facebook, Google and Yahoo! as users of this approach, though these companies are able to operate for most of the year without chillers. They can also shut down their facilities for a period of time, unlike most data centres in the world.

The paper predicts that chillerless solutions will become more widely adopted, mainly in areas where indirect free cooling systems are deployed. However, it also stipulates that this will not happen quickly, and that hybrid systems will remain the norm within the industry.

Mr Elder added: "If one accepts the premise that direct fresh air needs a back up then today’s efficient systems are often fairly comparable from a pure energy consumption basis so it is more about performance and flexibility.

"What organisations need to consider now and some are already doing this is how to deploy and operate efficient data centres beyond the cooling. IT represents a huge part of this and aligning the utilisation of IT along with intelligent data centres it can make a massive difference.

"By deploying infrastructure with the appropriate tools an organisation can manage all of their resources across multiple environments. Using this information to optimise during operation they can maximise capacity utilising all resources in an efficient way based on a whole range of criteria such as environmental conditions, workload, power, cost and availability."

 

This is an extract from a Keysource Whitepaper "The Use of Fresh Air in the Data Centre" with original comment from Rob Elder. The document was put together after a roundtable session with Phil Collerton, MD EMEA at the Uptime Institute; Alfonso Aranda, Consultant at the Uptime Institute; Luke Neville, Senior Technical Lead at Colt; Andy Lawrence, Research Director at The 451 Group; Jim McGregor, Head of Engineering and Data Centre Management at Vocalink; and Mike West, MD of Keysource, as well as representatives from Operational Intelligence, Norland Managed Services and Fujitsu.