This week, London played host to one of the first data centre summits of the year, the Data Centre South Summit 2016.
The conference saw several companies from the data centre industry make their way to the Barbican Centre for a day of colocation and hosting talks.
CBR lists five main takeaways from the summit.
1. Data centre efficiency boosted by deploying edge hubs in commercial buildings
Internet use is booming and shows no signs of stopping, according to Professor Ian Bitterlin of Leeds University.
He said that the world has gone from generating 690TB of data every month in 2001 to producing 811,000TB a month in 2015.
"This is a 1,200x growth, which is actually exceeding Moore’s Law."
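Those figures can be sanity-checked with a quick back-of-the-envelope comparison, using only the values quoted in the talk:

```python
# Values as quoted by Bitterlin; a rough check, not an independent measurement.
monthly_data_2001_tb = 690
monthly_data_2015_tb = 811_000

growth = monthly_data_2015_tb / monthly_data_2001_tb  # ~1,175x, i.e. "about 1,200x"

# Moore's Law as a yardstick: a doubling every two years over the same
# 14-year span gives 2**7 = 128x, so data growth far outpaced it.
years = 2015 - 2001
moore_factor = 2 ** (years / 2)

print(f"data growth: {growth:.0f}x vs Moore's Law over {years} years: {moore_factor:.0f}x")
```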
The answer? Deploying edge data centres into commercial buildings everywhere.
Bitterlin said: "Looking at edge data centres, what about every hotel in London having 200 to 300kW small data centres in their basement instead of connecting to a large facility out of town?
"Smart cities are coming but they are dependent on data centres. The idea of commercial buildings becoming distributed data centres is not as weird as it sounds."
To cool down the smaller hubs, Bitterlin suggested the use of liquid cooling.
He said that as governments speak of broadband for all, they do not realise the huge pressure they are putting on data centres, from power to workloads.
"Fast broadband for all? Which idiots have not looked at the fact that if you give people fast broadband, they are going to use it?"
To illustrate how internet consumption is creating record-breaking power demands on data centre infrastructure, Bitterlin said that PSY's YouTube video Gangnam Style consumed 298GWh in a single year, the equivalent of around 100 million litres of fuel oil, when it had just over 1.2 billion views. Today the video has over 2.5 billion views.
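The per-view energy implied by those numbers is easy to back out; this is a rough sketch using the figures quoted, not an independent measurement:

```python
# Figures as quoted in the talk; back-of-the-envelope only.
total_energy_gwh = 298   # one year of Gangnam Style streaming
views = 1.2e9            # "just over 1.2 billion views" at the time

wh_per_view = total_energy_gwh * 1e9 / views   # GWh -> Wh, then per view
print(f"~{wh_per_view:.0f} Wh per view")       # on the order of 250 Wh per play
```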
"Data centres consume 2-3% of our [National] Grid capacity and that is currently growing at about 15-20% CAGR.
"Data centres are zero efficient; they are not energy efficient, everything that comes in goes out, and there is a lot of heat [wasted]."
In the UK, each data centre kWh enables £123.61 of business, according to Bitterlin.
Returning to his idea of data centres in London's hotels, he said that a hotel could benefit from 100 to 200kW of continuous waste heat to feed its hot water system.
And what does the future of the internet look like after all? Bitterlin said: "Super-conduction microelectronics is probably the future of the internet."
2. Edge does not mean micro data centre – so what can it be used for?
Schneider Electric brought the topic of edge data centres to DCSS 2016, something the industry is starting to consider seriously as more ‘things’ get connected and data loads increase at astonishing rates.
Today, there are two billion internet users, 21 billion network devices and 1.3 million video views per minute, said Tony Day, global director for data centre projects at the company. "Data is growing tremendously (…), however, there is a global trend towards net neutrality which could slow things down."
To answer this, the industry is coming up with localised computing systems. Edge computing will develop platforms to distribute computing loads closer to the devices’ location.
Day said: "Edge does not mean micro data centre. It could be a gateway or an embedded device, it could be a regional data centre, or it could be one to eight local racks or a micro data centre.
"[It] should also not be confused with converged IT, which should also not be confused with micro data centres. Converged is generally for larger facilities."
Applications where micro data centres are being used include enterprise and colocation, manufacturing, retail and industrial.
As an example, Day pointed to the work Schneider Electric is doing at La Sagrada Família in Barcelona, which now has its own on-site data centre.
Day said the challenges with the monument, which is still under construction, were that equipment needs to be relocated every three years, that deployment has to be spread out for reduced latency, and the harsh, dusty environment the hub has to sit in.
Schneider installed two non-ISO 25ft micro data centre blocks, ten racks at 4kW/rack upgradable to 8kW/rack, DX overhead fan coil cooling and a Symmetra PX UPS.
3. Selecting colo providers is not (just) about the money
One of the top seminars at DCSS 2016 addressed how businesses can choose their colocation provider based on eight key factors: technical standards, location, pricing, service level, proximity, connectivity, hidden costs and planning for the future.
Aaron Whitehouse, CMO of hosting firm Vorboss, said that business has traditionally focused on the connectivity model of the hub; however, there is a lack of data centre providers out there, and too high a focus on presence could be prejudicial.
To answer this, the industry has come up with some modern connectivity solutions, including managed services providers (MSP) offering networks on site, capability to quickly scale bandwidth and choice of on-net data centres.
Whitehouse said: "For the majority of people to be that close to what they need to connect to, is not that relevant. Getting a good quality link to that point is easier than getting to some certain points in the data centre."
Second on the list was technical standards around power resilience, high-density loads, fire suppression and security.
He said: "You expect the data centre to have reliable power. Then you have to consider security, then other things such as power density. You need to make the decision that there is enough growth for you in the future when you are planning to enter that data centre."
Moving on to location, Jonathan Arnold, MD of Volta Data Centres, said that location is something people cannot change, "unless you are Microsoft and put stuff in the sea".
When looking at location, important aspects to take into account are the proximity between the site and the business's location, parking space so customers can visit, and transport links.
Arnold said: "Commercials are absolutely key. You are not going to pay twice as much to be in a specific location, but is cheap the way to go?"
Whitehouse then warned about pricing, because offers that look the same on paper can hide different costs. "A lot of the offering from data centres is not equal; they will have different costs, different approaches. If you know how to play with the system a bit, they will rejig to suit your needs."
Adding to this, there is the service level aspect, where businesses need to look at competitive service-level agreements (SLA), a track record of the operator and cancellation rights.
Whitehouse said: "It is really hard to buy data centre space if you do not have the familiarity of the local market where the data centre sits."
Arnold also said that transparency is absolutely necessary for a long-term partnership, and that companies need to be attentive to hidden costs, cross-connect costs and foreign exchange rates.
Lastly, businesses need to future-proof and choose a site that allows them to scale as they grow, without the need to move from colo to colo over the years.
4. Object storage as an answer to the data storm
The world of IoT is expanding and is only going to generate even higher waves of data. By 2020, 35 zettabytes are expected to have been generated, mostly driven by machine data. This will be followed by interactions data, human files and transactional data, according to Cloudian.
Taking to the stage, Neil Stobart, EMEA technical director at Cloudian, said that data laws will add further pressure, as companies will be legally obliged to retain information, and "that is where big data analytics comes in handy".
Storage needs are evolving and squeezing the traditional model, he said, with object-storage-style data making up around 95% of the world's data today.
He compared an object to an X-ray image. "For example, in a picture you do not just store a picture, you store lots more information about it, such as the date when it was taken, where, the image owner, the image name, and so on."
According to Rackspace, object storage offers access via an API at application-level, rather than via an OS at filesystem-level.
Looking at what makes object storage attractive for unstructured data, Stobart said it is inexpensive, scalable and easy to access over internet protocols.
It is also self-healing, offers multi-tenancy and object-level security, is built for efficiency, provides self-service and globally distributed access, and stores data and metadata together.
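A minimal sketch of that model – the payload and its metadata stored together under one key in a flat namespace, reached through an API call rather than a filesystem path. The class and method names here are illustrative, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class StoredObject:
    data: bytes     # the payload itself
    metadata: dict  # e.g. when/where a photo was taken, its owner, its name

class ObjectStore:
    """Toy object store: no directories, just keys in a flat namespace."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, **metadata):
        # Data and metadata are written together, as one object.
        self._objects[key] = StoredObject(data, metadata)

    def get(self, key):
        return self._objects[key]

store = ObjectStore()
store.put("xray-001.jpg", b"...image bytes...",
          taken="2016-02-01", location="London", owner="radiology")
obj = store.get("xray-001.jpg")
print(obj.metadata["location"])
```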
5. We live in a mission critical age
Most data centres will claim uptime of 99.95% to 99.99999%. However, even the slightest downtime can have a tremendous impact, not only on the colocation provider but also on those deploying in that data centre.
Andy Bailey, solutions architect from Stratus Technologies, spoke at the summit about how downtime is not an option anymore. To show how critical uptime has become, he mentioned Chancellor George Osborne’s remark that "this year, the economy is mission critical".
Bailey said: "We are relying on software that is doing all the monitoring [from control centres that overlook the data centre], and that software has to run properly to avoid the data centre failing.
"The goal is to ensure as near-perfect uptime as possible. Conventional technologies do not provide the needed uptime and fail to prevent downtime."
He said that standard servers have single points of failure that cause downtime and data loss, and take between one and four hours to fix.
"The natural way to improve that is to cluster these systems together, but management complexity adds risk and expense. Scripts must be executed, and it is about failure recovery, not failure prevention."
Even high-availability virtualisation, which has grown over the last couple of years according to Bailey, is not optimised for availability; it is very similar to a cluster and can still experience failover.
"We then have to talk about availability levels. A conventional server is available 99% of the time representing an average yearly down time of 87 hours and 40 minutes."
He then said that public cloud services, which promise around 99.5% uptime, average 43 hours and 50 minutes of downtime, while those with 99.9% availability are on average down eight hours and 46 minutes every year.
High availability clusters with 99.95% availability have a yearly downtime average of four hours and 23 minutes, while virtual fault tolerance software with 99.995% uptime has 26 minutes and 18 seconds.
Continuously available systems are among the most reliable today, with 99.999% uptime, representing a minimal downtime of five minutes and 16 seconds a year.
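All of the downtime figures above follow from a single calculation: a year has roughly 8,766 hours, and the unavailable fraction of that is the expected annual downtime.

```python
HOURS_PER_YEAR = 365.25 * 24  # ~8,766 hours

def yearly_downtime_minutes(availability_pct: float) -> float:
    """Expected minutes of downtime per year for a given availability %."""
    return HOURS_PER_YEAR * 60 * (1 - availability_pct / 100)

# The availability tiers discussed in the talk:
for pct in (99.0, 99.5, 99.9, 99.95, 99.995, 99.999):
    minutes = yearly_downtime_minutes(pct)
    print(f"{pct}%  ->  {int(minutes // 60)}h {minutes % 60:.0f}m per year")
```

Running this reproduces the quoted figures, from 87h 40m at 99% down to about five minutes at 99.999%.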