
Does Containerisation Spell the End for Virtualisation?

Virtualisation has enabled organisations to create IT services using resources that are traditionally bound to hardware, utilising a physical machine’s full capacity by distributing its capabilities among many users or environments, writes Martin Percival, Solutions Architect Manager at Red Hat.

You might, for example, have three physical servers with individual dedicated purposes, each running at just 30 percent capacity. With virtualisation, you can split one server into separate, distinct, and secure environments known as virtual machines (VMs). By consolidating all three workloads onto a single server, you push its utilisation from 30 percent towards 90 percent, leaving two empty servers that can be reused for other tasks or retired altogether to reduce cooling and maintenance costs.
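The consolidation arithmetic above can be sketched in a few lines. This is a back-of-the-envelope illustration using only the article's figures (three servers at 30 percent); the variable names are my own:

```python
# Consolidation sketch: three servers, each 30% utilised, folded onto one host.
servers = 3
utilisation_per_server = 0.30  # 30% of one machine's capacity

total_load = servers * utilisation_per_server  # roughly 0.9 of one machine's worth of work
consolidated_utilisation = total_load / 1      # all workloads placed on a single server
freed_servers = servers - 1                    # hosts that can be reused or retired

print(f"Consolidated utilisation: {consolidated_utilisation:.0%}")  # 90%
print(f"Servers freed: {freed_servers}")                            # 2
```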


Not only has virtualisation enabled companies to partition their servers, but it has also allowed them to run legacy apps on multiple operating system types and versions.

The Rise of Kubernetes and Containerisation

With the advent of Kubernetes, businesses have been carefully considering what VMs offer in comparison with Linux containers, which represent another evolutionary leap in how we develop, deploy, and manage applications.


A Linux container consists of one or more processes that are isolated from the rest of the system. All the files necessary to run those processes are provided by a distinct image, meaning that Linux containers are portable and consistent as they move from development, to testing, and finally to production. This makes container-based pipelines quicker than those that rely on replicating traditional testing environments, and helps to ensure that what works on a developer's laptop also works in production.
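The idea that an image carries everything its processes need can be illustrated with a minimal image definition. This is a hypothetical sketch, not from the article: the base image, file names, and command are all assumptions.

```dockerfile
# Hypothetical minimal image: everything the process needs is baked in at build
# time, so the same image behaves identically on a laptop, in test, and in production.

# Assumed base image (Red Hat Universal Base Image with Python)
FROM registry.access.redhat.com/ubi9/python-311

WORKDIR /app

# Dependencies are fixed when the image is built, not when it is deployed
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application itself, and the single isolated process the container runs
COPY app.py .
CMD ["python", "app.py"]
```

Because the dependencies travel inside the image, there is no separate "install the right libraries on the test server" step to drift out of sync.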


While it might take minutes to spin up a fully functional VM, since each one contains a complete operating system, a container typically starts in seconds. This agility also helps with scalability, as individual services can be spun up on demand. The smaller size of containers brings efficiency benefits too: they can be packed more densely onto existing hardware, reducing costs for computing infrastructure, whether in the cloud or on-premises.

Connecting, securing, and scaling such services becomes a hard task at large scale, so Kubernetes was created as an open source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerised applications and makes it easier to take new architectural approaches, such as microservices, to build scalable solutions. Kubernetes lets you cluster together groups of hosts running Linux containers, then manages those clusters for you, deciding where individual containers run for performance and high availability.
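Kubernetes' declarative style can be sketched with a minimal Deployment manifest. All names and the image below are hypothetical, not from the article; the point is that declaring `replicas: 3` asks Kubernetes to keep three copies of the container running and to reschedule them if a host fails, so scaling up becomes a one-line change rather than a manual process.

```yaml
# Hypothetical Deployment: service name and image are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments            # assumed service name
spec:
  replicas: 3               # Kubernetes maintains three running copies across the cluster
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
      - name: payments
        image: registry.example.com/payments:1.0   # assumed image location
        ports:
        - containerPort: 8080
```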

Virtualisation may well still have a place in handling hardware environments, but it’s important to see the rise of Kubernetes and container technology as an evolution in the way we are able to develop, deploy and run applications more consistently and efficiently across both in-house and public cloud infrastructures.

The Benefits of Container-Native Virtualisation during the Transition to Containers

Companies with existing virtual machine-based workloads that cannot be easily containerised – and that have adopted, or want to adopt, Kubernetes – might want to consider container-native virtualisation. This technology provides a unified development platform where developers can build, modify, and deploy applications residing in containers and VMs in a shared environment.

With container-native virtualisation, teams that rely heavily on existing VM-based workloads can containerise applications faster. Existing virtualised workloads can be run directly in the same environment as newly developed containers, allowing early interoperability with older solutions.

Over time, the older VM-based applications can be split into a set of smaller services, each in its own container, allowing more flexible scaling of services as and when needed. In this way, businesses can move forward with a container strategy without having to implement a big-bang conversion of existing applications before getting started. This, in turn, can bring forward the successful implementation of digital transformation goals.

Have Containers Killed Virtualisation?

Virtualisation has long had a strong reputation for the security and isolation of VMs running on a single machine, and many point to this as a key reason for VMs' continued existence. Because containers share the same underlying operating system, questions have been raised about the potential for cross-container attacks if one container is compromised.

In reality, virtualisation solutions have always had known exploits of their own, and containers can now draw on a variety of techniques – such as kernel namespaces, SELinux policies, and seccomp filters – to provide security and isolation. Perhaps the main remaining benefit a VM provides is its ability to run multiple different operating systems.

There are some clear-cut cases where containers deliver significant benefits: developer flexibility, consistency of operational environments, and scalability of running applications. While it is hard to foresee the complete death of virtualisation, these factors make a compelling argument for every business to examine containers as a way to deliver solutions in a hybrid-cloud world.
This article is from the CBROnline archive: some formatting and images may not be present.

CBR Staff Writer
