April 2, 2015

Q&A: Scaling for the future with Dr Matthew Storey

The Lancaster University Systems Technical Coordinator talks to CBR about the institution's latest collaboration with Hitachi Data Systems.

This week Hitachi Data Systems (HDS) announced its work with Lancaster University, a UK top-ten university customer. HDS has provided the institution with a complete infrastructure solution supporting its business applications and research data storage.

The university estimates that its data will grow from 300TB to 4PB within the next few years. Recent changes in government policy mean the university is now mandated to retain research data for up to ten years, highlighting the need for the flexibility to scale for this predicted growth.

With the university’s environment approximately 90% virtualised, end-user services have seen marked improvement, allowing students, faculty members and administrative staff to work more efficiently, access the services they require, and maximise use of university research data.

Following on from this announcement, CBR sat down with Dr Matthew Storey to talk about Lancaster University’s storage programme.

JL: Following the latest announcement by HDS regarding its work with Lancaster University, could you tell us more about this collaboration between the university and HDS?

MS: We started our relationship with HDS about six years ago and have been building on it for quite a while. We did quite a major refresh about 12 months ago, looking ahead to the challenges facing us over the next four to five years.

When we originally started, the relationship was about finding a way to support the consolidation and simplification of the IT infrastructure at Lancaster University. We started that, quickly became successful, and after four to five years we realised we needed a bigger infrastructure to support our requirements.


We have done all of the simple things we can and had to move on to the challenges that lie ahead, namely the massive amounts of data emanating from the university's research activities, which need to be looked after, stored away and accessed by many different people.

There is quite a large performance requirement and a very large space requirement associated with that. We have been working with HDS for quite a while to scope that out and understand what it really means, and which HDS products were suitable for what we were trying to achieve.

JL: Can you explain more how the data will be stored?

MS: With the infrastructure we have from HDS, we wanted to take a long-term view of supporting ourselves, so we wanted a solution that would provide a framework we could build on. We ended up with the storage virtualisation product line from HDS, which gives us a core infrastructure solution with high-performance storage. We also wanted other storage devices to appear as one unified estate, enabling us to grow to very large data sizes.

The underlying infrastructure is that of a classic SAN with virtualisation technologies in there, the VSP product line from HDS. On top of that we have also brought in other technology, such as the Hitachi Content Platform, to enable us to take the vast quantities of data we are dealing with, store them away, replicate them, and provide resilience in case of failure or anything else that may happen to that data.

It was key for us to have that "scaffold" so that we can keep expanding the environment and react to the changing requirements of the research community. It is hard to look into a crystal ball and see that far into the future, so we have to be prepared for changes to come along, including a few good surprises.

JL: With 90% of the university’s data having been virtualised, when do you expect to action the remaining 10%?

MS: At the moment we are at about 90% and we are picking away at what is left. Unfortunately, some things do not naturally fit inside virtual environments for various reasons, be that workload or capacity.

Whilst our strategy aspires to virtualise first, there are one or two occasions where that doesn't fit, such as very high-load systems that we cannot bring into our environment, or hardware requirements that simply cannot be virtualised.

There are one or two exceptions, but we are genuinely looking to bring our entire infrastructure under the virtualisation banner, pushing beyond the rough 90% mark.

JL: We know that virtualisation generally has a positive effect for its users in many ways. How has virtualisation influenced costs and efficiencies within the university's data programme?

MS: It has been a very positive story. Obviously, we have an awful lot less hardware on site. We managed to draw in servers and infrastructure from elsewhere on campus, giving us a simplified data centre infrastructure with less switching and fewer of the devices associated with classical servers.

We are using less power because we are not running all these separate servers and their associated infrastructure. It has also simplified management, because we are getting very high consolidation ratios, partly due to virtualisation and partly because we can leverage so much more performance from the back-end storage than we initially envisaged.

It took four years for HDS’s flash media devices to deliver all the performance required by the university. It has been quite positive in that regard.

Furthermore, virtualisation has enabled us to bring up a lot of research instances across various subjects within the university. Whether that be medical research working with the latest technology, classical computer science, or any other subject that might require server or storage resources, students are able to bring up an environment and test against it very quickly, with minimal effort. They can get themselves up and running quickly, leading to a more accelerated approach to prototyping.

JL: The data centre industry is now looking into the next big thing for data storage: cloud services. Is Lancaster University also considering moving to a cloud environment?

MS: We always actively explore that avenue, and part of what we did was leave the door open to hooking into cloud technology through HDS's products. We are constantly reviewing what we do and where it is best to host our data.

We evaluate each case on its merits, based on cost, security and functionality, as well as what our users want, because we are all about delivering the best student experience we possibly can.

If it is of use to the students, the staff or the researchers to move something to a cloud-based environment, we will happily do that, but it's very much on a case-by-case basis.
