Attention, managers of virtual machine infrastructure: containers are coming. For some, they are already here. It is not the end, but it is a new chapter. Be warned and be ready.
That’s the thing with infrastructure in a rapidly changing technology environment. Just when you think you’ve got a new thing nailed down and normalized in production, along comes a new, new thing and everybody is saying the formerly new thing is over.
Such is the case with processor virtualization, the sort of thing enabled by a hypervisor such as VMware vSphere or Microsoft Hyper-V. The way you got efficiency and resiliency for the growing sprawl of commodity servers in the data center was to virtualize those servers (as virtual machines, or VMs) and consolidate them on fewer physical hosts.
We are coming to the end of that movement. Most organizations with large numbers of x86 (Windows and Linux) servers are now majority virtualized. Many tell Info-Tech they are 90% or more virtual.
But now comes this thing called the application container and the container host platform (the best known being Docker). Wrapping an application in a container is said to be more efficient and lightweight than wrapping it in a VM. Further, to host a bunch of applications in containers on a server, you don’t even need a hypervisor.
So is virtualization over? Far from it. Virtualization is just beginning. The megatrends we’ve seen in IT infrastructure over the past decade or more are continuing, and chief among them is abstraction.
A container is just another form of abstraction. Where a hypervisor divides up, or partitions, a single physical machine into multiple virtual machines, containers partition a single operating system into multiple instances. The abstraction just sits at a different layer.
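To make that layering concrete, here is a minimal sketch using the Docker SDK for Python (assumptions: `pip install docker`, a running Docker daemon on a Linux host, and an illustrative Alpine image tag). It shows that a container sees the very same kernel as its host, where a hypervisor would give each VM a kernel of its own:

```python
import platform

import docker  # Docker SDK for Python (assumed installed)

client = docker.from_env()

# Kernel release as reported by the host OS.
host_kernel = platform.release()

# Kernel release as reported from inside a small Alpine container.
# containers.run() returns the container's output when not detached.
container_kernel = client.containers.run(
    "alpine:3.18", "uname -r", remove=True
).decode().strip()

print("host kernel:     ", host_kernel)
print("container kernel:", container_kernel)
# The two values match: the container is an isolated slice of the same
# running OS, not a separate machine with its own kernel.
```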
OS abstraction has been around as long as machine partitioning into VMs. In the past, VMs had an advantage over containers in that they were more portable: because each VM had a complete OS installed, it could be copied from host to host. This changed with the advent of container platforms like Docker.
Docker extended the idea of a container to the concept of a “shipping container for code” that promised frictionless deployment and optimum portability. Now you can package up just the OS services that the application depends on and move the packaged container to another computer running the same operating system and the Docker platform.
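As a sketch of that workflow, again using the assumed Docker SDK for Python (the registry address and the `./myapp` directory layout are hypothetical), the same image can be built once, pushed to a registry, and run unchanged on any Linux host with Docker installed:

```python
import docker

client = docker.from_env()

# ./myapp holds the application plus a Dockerfile that layers only the
# OS-level dependencies the app needs onto a slim base image, e.g.:
#   FROM python:3.11-slim
#   COPY app.py /app/app.py
#   CMD ["python", "/app/app.py"]
image, build_log = client.images.build(
    path="./myapp", tag="registry.example.com/myapp:1.0"
)

# Push the packaged container image to a shared registry...
client.images.push("registry.example.com/myapp", tag="1.0")

# ...and on any other Linux host running Docker, pull and run the same bits:
#   client.images.pull("registry.example.com/myapp", tag="1.0")
#   client.containers.run("registry.example.com/myapp:1.0", detach=True)
```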
Proponents of containers over VMs will point out that containers are more lightweight because they do not contain a full operating system, only the bits necessary to make the application run on a given OS. It also means that infrastructure management and development can function with less operational overlap.
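The weight difference is easy to verify. A short sketch (same assumed Python SDK) reports the on-disk size of a minimal container image, which is measured in megabytes where a full VM disk image is measured in gigabytes:

```python
import docker

client = docker.from_env()

# Pull a minimal Linux userland image and report its size.
image = client.images.pull("alpine", tag="3.18")
print(f"{image.attrs['Size'] / 1e6:.1f} MB")
# Single-digit megabytes, versus the tens of gigabytes a typical
# VM disk image occupies.
```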
In an ideal world, infrastructure operations would focus on the availability, capacity, and performance of a homogeneous platform. Developers would focus on building and configuring the application. It would be a frictionless process in that there would be no need to establish requirements and approvals for a server (even a virtual one). When the application is ready, it is simply moved to the appropriate host.
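In code, that hand-off could look like the following sketch (the host address, image name, and a Docker daemon listening on TCP are all assumptions for illustration):

```python
import docker

# Operations publishes a pool of container hosts; developers never file a
# request for a server, virtual or otherwise.
prod = docker.DockerClient(base_url="tcp://container-host-01.example.com:2375")

# "Moving the application to the appropriate host" is one call: start the
# packaged container, with availability expressed as a restart policy.
prod.containers.run(
    "registry.example.com/myapp:1.0",
    detach=True,
    restart_policy={"Name": "always"},
)
```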
When an application is “wrapped” in a virtual machine, that machine has all the maintenance requirements (such as configuration and patching) of a physical machine. Overlap in the accountability for the maintenance of that VM is a source of friction (and possibly contention) in operations.
That highly efficient virtual server infrastructure you have been building and tending this past decade is far from obsolete. For all the hype, containers remain an emerging technology choice, and it is not an either/or decision.
In that ideal world, the infrastructure would be a homogeneous grid of commodity servers. This is what cloud infrastructures look like. The real world of the corporate data center is more heterogeneous. VMs have moved from the next big thing to legacy investment.
VMs are also better suited to heterogeneity, where multiple OS types and versions must be hosted; containers work best on a single server type and OS. The current investment in virtualization also includes mature management and governance tool sets for the infrastructure. A 2015 survey by StackEngine (later acquired by Oracle) found that 49% of respondents listed security and the maturity of operational tools as their chief concerns with containers.
To protect and leverage the current investment in virtualization while exploring the potential of containers, the near-term strategy is to host your emerging container infrastructure on virtual machines. Hosting a container on a VM may at first seem redundant and wasteful of resources, but it is the best way to take advantage of containers while ensuring enterprise-level security, reliability, availability, and scalability.
Abstraction in the form of virtualization and software-defined infrastructure isn’t going anywhere. But one form of abstraction, the server hypervisor, has peaked in market penetration and mainstream adoption. Future infrastructures will be 100% software defined, but that doesn’t mean 100% of servers will need hypervisors. Your container strategy should focus on a hybrid future that bridges from legacy to new-style virtualization.