Containers are unlike any other compute infrastructure. Prior to containers, compute infrastructure was composed of a set of brittle technologies that often took weeks to deploy. Containers made workload deployment automation mainstream, cutting deployment times from weeks to minutes, if not seconds.

Now, to be perfectly clear, containers themselves aren’t some sort of magical automation sauce that changed everything. Containers are something of a totem for IT operations automation, for a few different reasons.

Unlike the virtual machines (VMs) that preceded them, containers don’t require a full operating system for every workload. A single operating system can host hundreds or even thousands of containers, cutting per-workload RAM overhead from several gigabytes to a few dozen megabytes. Similarly, containerized workloads share core pieces of the host operating system – most importantly its kernel – which can make maintaining key aspects of the container operating environment easier. When you patch the underlying host, every container running on it benefits from the update.
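To see that sharing in action, here is a minimal sketch – assuming the Docker SDK for Python and a local Docker daemon, with the image tag chosen purely for illustration – that asks a throwaway container for its kernel version. The answer is the host’s kernel, because the container doesn’t bring one of its own.

```python
# Minimal sketch: containers borrow the host's kernel rather than booting their own.
# Assumes the Docker SDK for Python (pip install docker) and a running local Docker daemon.
import docker

client = docker.from_env()

# 'uname -r' inside the container reports the *host* kernel release,
# because the container shares it instead of running a full OS of its own.
kernel = client.containers.run("alpine:3.19", "uname -r", remove=True)
print("Kernel seen inside the container:", kernel.decode().strip())
```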

Unlike VMs, however, containers are feature-poor. For example, they have little built-in resiliency: traditional vMotion-style workload migration doesn’t exist, and we’re only just now – several years after containers went mainstream – starting to get decent persistent storage options for containers.

This meant that during the initial adoption phase of containers, they were only really good for composable workloads: those whose configuration is defined in text files, and whose creation can thus be automated. Containers became the go-to workload deployment technology for IT movements such as cloud native, DevOps, and Continuous Integration/Continuous Delivery (CI/CD), which sought to update applications regularly and to design those applications to tolerate failure.

These modern application development techniques, while not exclusively focused on containers, generally sought to create applications which did not presume that the underlying IT infrastructure was always going to be available. Only data storage was considered sacrosanct; the application itself, and storage of the application’s configuration, were regularly expected to be lost, so applications were designed to be created and destroyed in an automated fashion. This is why containers ended up at the center of the most recent push for IT automation.
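To make “composable” concrete, here is a minimal sketch – assuming the Docker SDK for Python, a local Docker daemon, and an illustrative workload.json file; the name, image, and port are examples, not anyone’s actual configuration – in which a workload’s entire definition lives in a text file and its creation and destruction are a couple of function calls.

```python
# Sketch: a workload defined entirely in a text file, created and destroyed programmatically.
# Assumes the Docker SDK for Python and a local Docker daemon; names and values are illustrative.
import json
import docker

# workload.json might contain: {"name": "web", "image": "nginx:1.25", "port": 8080}
with open("workload.json") as f:
    spec = json.load(f)

client = docker.from_env()

# Create the workload from its text-file definition.
container = client.containers.run(
    spec["image"],
    name=spec["name"],
    ports={"80/tcp": spec["port"]},
    detach=True,
)
print(f"Started {container.name} ({container.short_id})")

# ...and tear it down just as easily, which is why automation tooling took to containers so quickly.
container.stop()
container.remove()
```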

Container Sprawl

IT automation has always had one predictable effect: the easier you make it to wrangle workloads, the more workloads organizations will deploy. This has knock-on effects that ripple out from whatever you’re automating to the rest of the IT infrastructure.

The transition to containers has created three primary effects:

  1. Each individual server can host more workloads using containers than it could using VMs
  2. Organizations deploy more workloads once they’ve made the jump to containers
  3. Organizations create and destroy containerized workloads far more frequently than they do workloads in VMs or on bare-metal servers

One consequence of this is that containers change the rules for other aspects of IT operations, most notably networking. While IT automation certainly existed before containers, most organizations weren’t regularly creating or destroying VMs in bulk, nor were they doing this to bare metal servers. In some organizations, containers are created and destroyed by the thousands every day.

Automating Automation

Manually configuring networking for workloads isn’t realistic once companies have started to truly embrace containers, and IT automation along with them. VMs live longer, and are thus easier to track. Containers are ephemeral; not only are they frequently destroyed on one host and re-created on another, but this sort of activity often takes place across infrastructures.

A container destroyed on-premises might just as easily be restarted in a public cloud. Similarly, because containerized applications are usually designed to scale automatically, an application might go from occupying one container to occupying hundreds, and then back down to one again. Increasingly, this scaling happens across infrastructures, with components of a single application running on-premises and in multiple public clouds at the same time.
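For a sense of what that elasticity looks like from the automation side, here is a minimal sketch using the official Kubernetes Python client; the deployment name, namespace, and replica counts are purely illustrative, and it assumes a reachable cluster with an existing Deployment.

```python
# Sketch: scaling a containerized application out and back in again.
# Assumes the official Kubernetes Python client (pip install kubernetes), a reachable cluster,
# and an existing Deployment; the name, namespace, and replica counts are illustrative.
from kubernetes import client, config

config.load_kube_config()   # use the local kubeconfig for cluster access
apps = client.AppsV1Api()

def scale(deployment: str, namespace: str, replicas: int) -> None:
    """Set the replica count; Kubernetes creates or destroys containers to match."""
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

scale("web", "default", 100)   # burst from one container to a hundred...
scale("web", "default", 1)     # ...and back down again when demand subsides
```

Every one of those scale events implies network changes: addresses assigned and released, and traffic steered to wherever the new containers landed.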

If it isn’t clear already, once containers enter the mix, the IT team needs to have serious discussions about manual configuration and management approaches versus automation – and that conversation has to include network automation.

In most organizations, network configuration for the predominantly static workloads that traditionally populate VMs and bare-metal servers is slow and bureaucratic. It can take IT hours – sometimes days, or even weeks – to assign resources.

Networking often feels like the last holdout on the automation front. Virtualization automated compute. Software-defined storage solutions like hyperconvergence automated storage. That leaves networking as the last major bottleneck: manual network operations are a serious impediment to containerization, and to completing the IT operations automation journey.

Like virtualization management solutions, container automation and orchestration platforms – most notably Kubernetes – reduce some of the burden. Kubernetes automates networking within each individual container host’s operating system, and can coordinate virtual networking across clusters of container hosts.
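One way to see that coordination is simply to ask the cluster where each workload’s network identity lives. The hedged sketch below, using the official Kubernetes Python client against whatever cluster the local kubeconfig points at, lists each pod’s cluster-assigned IP alongside the node currently hosting it.

```python
# Sketch: every pod gets a cluster-routable IP, regardless of which host it landed on.
# Assumes the official Kubernetes Python client and a cluster reachable via the local kubeconfig.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# List pods cluster-wide and show the virtual-network address Kubernetes assigned to each,
# along with the physical host (node) currently running it.
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
          f"IP {pod.status.pod_ip} on node {pod.spec.node_name}")
```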

The network automation gap for most organizations is the physical network. At the end of the day, all those bits get shuffled over copper and glass, and someone – or something – has to tend to the bit pipes. This is where Cumulus comes in with NetQ. Keeping the bit pipes operating – and connected to the workloads that need them – is what Cumulus does. By making automation of the physical networking infrastructure simple, Cumulus enables network automation end to end, unlocking the potential of containers and allowing them to be used as they were intended.