Containers vs. hypervisors: the debate is ongoing, but the two technologies don't need to be pitted against one another. In fact, each offers benefits that suit some workloads better than others.

Containers are considered resilient, in part, because they can be deployed both as classic monolithic applications and as highly composable microservices. They are portable, and they can be scaled up or down and deleted when no longer needed. Among many other benefits, containers pack more applications onto a single physical server than a virtual machine (VM) can, which makes them the better choice when you need to run the maximum number of applications on a bare minimum number of servers.
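To make the scale-up-and-delete point concrete, here is a minimal sketch using the Docker SDK for Python; the nginx image and the replica count of three are arbitrary choices for illustration, not a recommendation.

```python
# A minimal sketch of scaling containers up and tearing them down on demand,
# using the Docker SDK for Python. Image name and replica count are arbitrary.
import docker

client = docker.from_env()

# Scale up: start three identical containers from the same image.
replicas = [
    client.containers.run("nginx:alpine", name=f"web-{i}", detach=True)
    for i in range(3)
]

# Scale down: stop and delete the containers when no longer needed.
for container in replicas:
    container.stop()
    container.remove()
```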

In the current technology climate, the value of hypervisors seems to be slowly diminishing, while containers continue to enjoy a steady rise in popularity. Part of the decline of VMs comes down to resource allocation: each VM requires a full copy of an operating system plus a virtual copy of the hardware that the OS needs to run, while containers only need the supporting libraries required to run a specific program.

Furthermore, VMs don't provide the same level of portability, consistency, or speed that containers do for development, testing, and deployment, all of which factor into an organization's competitive position in the market.

The beginning of the end for hypervisors?

When Intel built many of the hypervisor's coveted capabilities directly into the chip with its Intel VT-x instruction set, hypervisors began depending on paravirtualized (PV) drivers to boost performance, which compromised their dependability.

With inconsistent quality and performance that varies by 5 to 30% depending on the workload, paravirtualized drivers just don't have the operating capacity of bare metal. In addition to performance issues, PV drivers bring a host of cybersecurity issues to the table. Because each hypervisor and each guest operating system requires a different PV driver, the entire setup requires a lot of code, which gives motivated attackers a broad attack surface. Although an attack of that severity is usually unlikely, it isn't something most organizations want to risk; better safe than sorry. Add the need for a vast testing and support system to the security issue, and you have an inefficient model, at least on paper.

VMs still serve several important purposes, and containers aren't without their flaws. With our newest product launch, Cumulus Host Pack, we've attempted to alleviate many of the potential operational challenges with containers so that they become a more realistic choice for organizations that are deciding between containers and hypervisors.

Common container concerns and how we solved them

Containers allow companies to develop and compete at a rapid clip, and they promise unmatched flexibility and scalability—but they are sometimes challenging for certain types of workloads. As container adoption becomes more widespread, so do ongoing concerns about their security, management, and cost:

1. Security

Problem: It takes a lot of work to secure containers, and that security isn't built in by default. Containers are certainly accessible, but this benefit turns into a detriment when dealing with security. They are difficult to track and identify, and can be unintentionally put on untrusted network segments. Plus, the sheer number of containers and their inherent connections to critical resources often mean an expanded attack surface.

Furthermore, without container visibility, you may unknowingly have containers running on ports they shouldn't be running on, or containers unexpectedly running on servers that aren't yet secured for production. If a container is running on a compromised port, the entire operating system could be taken down and you wouldn't know where the issue occurred.

Solution: Host Pack gives you detailed visibility into containers, down to the port. You can see which container is running which service and on what server, so you can spot where there may be a vulnerability and pinpoint which application or container is causing it. This allows you to identify, fix, and validate potential security issues.
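For illustration only (this is not Host Pack's API), the sketch below shows the kind of port-level check described above, using the Docker SDK for Python; the allowlist of approved ports is a hypothetical example.

```python
# A minimal sketch of port-level container visibility using the Docker SDK for
# Python. APPROVED_PORTS is a hypothetical allowlist for this host.
import docker

APPROVED_PORTS = {"80/tcp", "443/tcp"}

client = docker.from_env()

for container in client.containers.list():
    image = container.image.tags[0] if container.image.tags else container.image.short_id
    for port, bindings in (container.ports or {}).items():
        if not bindings:
            continue  # exposed but not published to the host
        host_ports = ", ".join(b["HostPort"] for b in bindings)
        note = "" if port in APPROVED_PORTS else "  <-- not on the allowlist"
        print(f"{container.name} ({image}): {port} -> host {host_ports}{note}")
```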

2. Lots of moving parts

Problem: According to Rob Hirschfeld, OpenStack Foundation board member, “Breaking deployments into more functional discrete parts is smart, but that means we have MORE PARTS to manage. There’s an inflection point between separation of concerns and sprawl.”

With 25% of companies now running 10 or more containers simultaneously on a single system, the number of containers per host is rapidly increasing. Consider that in a container environment, updates and repairs must be multiplied by the number of containers—and the sheer volume of units may require a heightened level of management capabilities and additional resources.

If you’re managing your physical assets efficiently, this may not be an issue—but if your asset management system is barely keeping pace with the maintenance of physical machines, containers are going to throw it into a tailspin.

Solution: Host Pack also provides robust connectivity, so you can easily identify and manage each specific container. The stack is unified with one language, the same tooling, and cohesive reporting, making the network easier to manage. Now you know how many containers of each service are deployed, what servers they are deployed on, and their relationship to the physical network.
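As a rough illustration of a per-host container inventory, rather than anything Host Pack-specific, the following sketch uses the Docker SDK for Python to count running containers by image on a single host.

```python
# A minimal sketch of a per-host container inventory, grouped by image, using
# the Docker SDK for Python. Run it on each host you manage.
import socket
from collections import Counter

import docker

client = docker.from_env()
hostname = socket.gethostname()

counts = Counter(
    c.image.tags[0] if c.image.tags else c.image.short_id
    for c in client.containers.list()
)

print(f"Containers running on {hostname}:")
for image, count in counts.most_common():
    print(f"  {count:3d} x {image}")
```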

3. Troubleshooting ephemeral containers

Problem: As discussed earlier, one of the advantages of containers is the ability to rapidly scale them up and down on demand. Although this is a great feature for providing high availability, it can create problems when you try to troubleshoot such a rapidly changing environment. Any existing intermittent problem in the infrastructure will only be amplified by the temporary nature of deployed containers.

Solution: Host Pack provides the ability to look at all changes within both the physical network underlay and the host and container environments. Both server and network administrators can easily see events within the infrastructure and correlate reported problems to container creation or teardown.
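To sketch how lifecycle events might be captured for later correlation (again, a generic illustration with the Docker SDK for Python, not Host Pack), the snippet below streams container create, start, die, and destroy events with timestamps. Shipping this output to a central log would give you a crude timeline to line up against reported network incidents.

```python
# A minimal sketch of recording container lifecycle events so they can be
# correlated with reported problems later. Uses the Docker SDK for Python;
# the output format here is an assumption, not Host Pack's.
import datetime

import docker

client = docker.from_env()

filters = {"type": "container", "event": ["create", "start", "die", "destroy"]}
for event in client.events(decode=True, filters=filters):
    ts = datetime.datetime.fromtimestamp(event["time"]).isoformat()
    name = event["Actor"]["Attributes"].get("name", "<unknown>")
    print(f"{ts} {event['Action']:8s} container={name} id={event['Actor']['ID'][:12]}")
```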

So which is the right choice?

Ultimately, it depends on what you're trying to accomplish. For more help deciding between containers and hypervisors, or figuring out how best to implement both, reach out to us, or head over to our blog where we discuss the benefits of VMware and Linux containers. If you're already operating in a containerized environment, check out our newest product, Host Pack, and see the technology for yourself with Cumulus in the Cloud.