
Containers debunked: DevOps, security and why containers will not replace virtual machines

The tech industry is full of exciting trends that promise to change the face of business as we know it, and one gaining a huge amount of focus is containers.

However, misconceptions threaten to root themselves deep in the mythology around the technology: confusion over what containers are, what can be done with them, and the idea that they replace virtual machines.

Lars Herrmann, GM of Integrated Solutions at Red Hat, spoke to CBR about five common misconceptions – but first, the benefits.

Herrmann said: “Containerisation can be an amazingly efficient way to do DevOps, so it’s a very practical way to get into a DevOps methodology and process inside an organisation, which is highly required in a lot of organisations because of the benefits in agility to be able to release software faster, better, and deliver more value.”


The second benefit is that the technology is a practical way for an organisation to adopt cloud – not so much from a technology point of view, but because containerisation makes it practical and relatively straightforward to embrace the cloud and the notion of things as a service and elastic infrastructure.

This is because containerisation defines an operational model and a set of workloads that lend themselves nicely to this paradigm of self-service and elastic shared infrastructure, the GM said.

Lastly, there is the relation to the uptake in conversation around microservices: taking apps, re-architecting them, and breaking them down into individual services. This helps build scale into the application, along with availability.

Herrmann said: “But the other advantage is about the ability to evolve and make changes to one part of the application without necessarily changing and impacting all the other parts, so it’s the ability to isolate change, which in turn can lower the risk that is inherently associated with every change, and then you can make it faster, more freely delegate inside the organisation.”

So those are the benefits, but misconceptions persist – starting with the idea that the technology is new, and that figuring out what containers do therefore makes them complex.

There are some new pieces to containers, such as the notion of the image, but operating systems such as Linux and Unix have supported containers for decades.

The image though is something that creates a lot of opportunity, “because at least in theory it opens the door for treating a container much like an aggregated application package, I can take an application as a whole, build it in one place, run it in one place, move it to another place and run it there again and that is what is so exciting – this portability promise,” he said.

The problem previously was that developers would build something in one place, move it to another, and find it no longer worked.

In theory, Herrmann says, the image promises not to change as it moves from point A to point B; this can be achieved, but not out of the box.

“That’s one of the misconceptions, containers are not universally portable just because of the way they work. They are basically a process running on an OS environment and most importantly there’s almost no application other than maybe ‘Hello World’ that consists of a single process or a single container image,” said Herrmann.

The portability of an application as a whole is largely defined by whether all the various microservices or container images can be moved from one location to another and still behave in the same way. This, he said, should never be taken for granted, because it depends on the underlying container platform that enables the capabilities for the container to run.
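One practical way to make that portability question concrete – a sketch for illustration, not anything Herrmann prescribes – is to check that every image in a multi-container application manifest is pinned by an immutable content digest rather than a mutable tag, so the bits that run at point B are provably the bits that were tested at point A. The manifest format and names below are hypothetical.

```python
# Sketch: verify every image reference in a (hypothetical) application
# manifest is pinned by digest, so moving the app between environments
# cannot silently pull different bits under the same tag.

def is_pinned(image_ref: str) -> bool:
    """An image pinned by digest looks like repo/name@sha256:<64 hex chars>."""
    if "@sha256:" not in image_ref:
        return False
    digest = image_ref.split("@sha256:", 1)[1]
    return len(digest) == 64 and all(c in "0123456789abcdef" for c in digest)

def unpinned_images(manifest: dict) -> list:
    """Return the service names whose images rely on mutable tags."""
    return [name for name, ref in manifest.items() if not is_pinned(ref)]

# Example manifest for a small multi-service application.
app = {
    "web":   "registry.example.com/shop/web@sha256:" + "a" * 64,
    "cache": "redis:latest",   # mutable tag: not safe to assume portable
}
print(unpinned_images(app))  # -> ['cache']
```

A gate like this only addresses image identity; as the article notes, behaviour still depends on the container platform the images land on.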

Another misconception is that containers can replace virtual machines; they may look similar on the surface, but they solve different problems.

Virtualisation solves the problem of how much utilisation can be generated from physical infrastructure – the physical server, its CPUs and memory.

Containers do not let users define a software-defined fabric to increase that utilisation. Virtualisation abstracts the underlying physical world and provides a software-defined infrastructure environment; containers do not.

A container runs inside an OS instance, and the OS takes care of driving the underlying hardware, but it does not present a virtual hardware environment to the container.

One of the implications of this is that containers are lightweight, because they don’t need to contain everything required to support a virtual hardware environment. “They can be smaller in image, smaller in surface, and also they have less of a performance impact because there’s no overhead,” he said.

This is why a container can effectively be launched as quickly as a process or application.

Basically, containers solve a different problem to VMs.

Another popular conversation around containers is how much isolation actually exists between them, and whether they can be trusted to run on a system alongside other containers and other users.

Herrmann said that while the level of isolation in the underlying infrastructure is a legitimate question, it is not the most important.

The real security issue with containers is not isolation but what is actually inside these images.

The issue is that the IT operations team is getting container images from developers without knowing what is in them, which is dangerous.

The solution is to use scanning technology in an automated fashion, so that vulnerable or unknown content can be identified and acted upon.
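As an illustration of what that automation might look like – a minimal sketch, with the scan-report format assumed for the example rather than taken from any specific scanner – a pipeline gate can parse scan findings and refuse to promote an image that carries issues above a severity threshold:

```python
# Sketch of an automated gate over container-image scan results.
# The report shape (a list of findings with a "severity" field) is an
# assumption for illustration; real scanners each have their own schema.

SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def should_block(findings: list, threshold: str = "HIGH") -> bool:
    """Block promotion if any finding meets or exceeds the threshold."""
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)

# Example report as it might come out of a scanner, already parsed.
report = [
    {"id": "CVE-2016-0001", "severity": "MEDIUM"},
    {"id": "CVE-2016-0002", "severity": "CRITICAL"},
]
print(should_block(report))  # -> True
```

The point of putting this in the pipeline, rather than leaving it to manual review, is exactly the one Herrmann raises: operations teams cannot know what is inside an image just by receiving it.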

On the subject of security for the container host fabric, the GM said that most offerings have reached a pretty high degree of maturity, with the runtime capability slightly more mature still.

What people need to remember is that if they produce, for example, a Docker container and publish it, then they have entered the operating system business.

Herrmann said: “Because when you build it, you inevitably hardwire which OS components, in which versions, are part of your container, and most are not aware of the implications of this. You would build a container today and put it on Docker and you would feel good, but three days down the road you have seen five more security issues, so unless you are taking action and fixing your container, you have now put up a stale piece of software with known security issues for other people to consume, with no warning.

“In this scenario the market is not very mature.”
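Herrmann’s point can be sketched in code – purely as an illustration, with dates and package names made up for the example: once an image is published, any advisory that appears later for a component baked into it leaves the image stale until it is rebuilt.

```python
# Sketch: flag a published image as stale once security advisories have
# appeared for the OS components hardwired into it at build time.
# Package names and dates below are illustrative, not real advisories.

from datetime import date

def stale_advisories(image_built: date, advisories: dict) -> list:
    """Advisories published after the build affect the image unpatched."""
    return sorted(pkg for pkg, published in advisories.items()
                  if published > image_built)

built = date(2016, 3, 1)           # day the container image was built
known = {
    "openssl": date(2016, 3, 4),   # advisory published three days later
    "glibc":   date(2016, 2, 20),  # fixed before the build; not stale
}
print(stale_advisories(built, known))  # -> ['openssl']
```

The only remedy the logic suggests is the one Herrmann describes: rebuild and republish the container whenever its hardwired components gain new advisories.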

While containers are not new, they are often treated as such, which has created confusion about how to treat them, what they should be used for, and what they are replacing. Clearing up the misconceptions about the technology will be vital for adoption to continue in a fashion that doesn’t lead to problems down the road.
This article is from the CBROnline archive: some formatting and images may not be present.