Storage used to be the last part of enterprise IT to be considered. NAS and SAN systems, typically used for backups and replication of important data, were bolted onto the rest of the infrastructure; they held data that applications did not need instantly. The focus was on cost and ease of maintenance, and storage played little role in central enterprise technology planning.
Storage had to be cheap but reliable, and for many industries it was an important part of corporate governance and compliance.
But the bottleneck was processing power – faster servers were more important than more quickly accessible memory.
That old storage world has disappeared into the cloud.
In-house storage and back-up systems cannot compete with cloud providers on price or ease of use. You can call up more storage resources instantly, you don't pay for capacity you're not using, and you don't lose space in your data centre.
At the same time, demand from the business for storage closer to hand is growing at an ever-accelerating rate.
New applications and platforms are creating vast quantities of data that need analysing, not just archiving.
Big data applications, which need access to ever larger pools of information, and Internet of Things projects all add to this flood of information.
Marketing departments are dealing with massive amounts of unstructured information from social media campaigns instead of neat spreadsheets from advertising agencies.
The application-centred enterprise needs storage tied to applications – software-defined storage.
Software-defined storage removes much of the management complexity for IT managers. Systems today don’t require you to allocate space to applications, provision hardware or maintain spinning disks.
The key benefit of software-defined storage is removing these headaches and automating more management functions – vital for surviving the onslaught of demand from new applications.
It also allows storage to be allocated to applications as and when they need it.
As storage has become more important it has also become more flexible for users with less of the lock-in of older technologies.
Migration is still a cost, and a risk, but it is easier than it was.
The reality for most large businesses is that their storage function has evolved over time, in response to business needs and cost pressures. It is more likely to be a hybrid system which includes solid state, disks and cloud at the very least.
Managing this remains a headache but there are advantages to a mixed environment – not least a better chance of bouncing back quickly from a failure.
Looking forward, this is likely to remain the case.
Whether the data centre of 2030 uses DNA storage, holograms or even individual electrons, managing resources will still be an issue.
Storage will remain a matter of cost, ease of management and an overriding need to keep pace with ever-faster growth.
These pressures suggest there will still be a mix of technologies in use. If applications continue to consume ever-growing quantities of data, the storage function will continue to grow too.
The data centre of the future will need more storage and it will play an ever more important role in turning technology into genuine business advantage.
This article is from the CBROnline archive: some formatting and images may not be present.