Who controls containers: developers, or operations teams? While this might seem like something of an academic discussion, the question has very serious implications for the future of IT in any organization. IT infrastructure is not made up of islands; each component interacts with, and depends on, others. Tying all components of all infrastructures together is the network.

If operations teams control containers, they can carefully review the impact that creating those containers will have on the rest of an organization’s infrastructure. They can plan for the consequences of new workloads, assign and/or reserve resources, map out the workload’s lifecycle, and plan for its retirement, including the return of those resources.

If developers control containers, they typically lack the training to see how one small piece fits into the wider puzzle, and almost certainly lack the administrative access to the other pieces needed to gain that insight. Given the above, it might seem like a no-brainer to let operations teams control containers. Yet in most organizations deploying containers, it is developers who create and destroy them, and they do so as they see fit.

This is not as irrational as it might at first appear. Systems administrators are trained predominantly to minimize risk. Developers are trained to be creative, and try new things. Containers don’t have the resiliency of infrastructure options like virtual machines (VMs), so the applications that live in them are designed to be more tolerant of infrastructure failure. In other words, it’s (usually) OK to take more risks with containerized applications, making them ideal for developers.

VMs and bare metal servers are better managed by operations teams, and are best suited for applications which aren’t designed to cope with fallible infrastructure. Containers shouldn’t be treated like VMs: not in how they’re managed, nor in which workloads reside inside them.

Container Strengths

Perhaps the biggest advantage of containers is that they can live anywhere. Containers can live inside a VM, in the public cloud, or on bare metal servers. They can have dedicated infrastructure designed for their strengths and weaknesses, or they can hitch a ride on whatever’s available.

Containers are largely created and destroyed via automation. This can be done through scripts, or with automation and orchestration solutions such as Kubernetes. Kubernetes has grown from container automation and orchestration into a complete container lifecycle management solution; it works on any infrastructure, and has more or less won the war for container management.
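As a rough illustration of that automation, the sketch below uses the official Kubernetes Python client to create, and later tear down, a small Deployment. The cluster access, namespace, names, and image are placeholder assumptions, not a prescription:

    from kubernetes import client, config

    # Assumes a kubeconfig is available (e.g. on a developer notebook);
    # inside a cluster, config.load_incluster_config() would be used instead.
    config.load_kube_config()
    apps = client.AppsV1Api()

    # A minimal three-replica Deployment; "demo-app" and the image are placeholders.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="demo-app", image="nginx:1.25")],
                ),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)

    # When the workload is retired, the same API destroys it just as easily.
    apps.delete_namespaced_deployment(name="demo-app", namespace="default")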

The ability to run a container anywhere, and do so with minimal overhead, makes containers especially useful to developers. Developers can prototype applications on their own notebooks, or on an on-premises server, then deploy a copy of the working code into production with unparalleled ease.
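For example, the same container image a developer prototypes on a notebook can be started unchanged against any Docker host; the sketch below uses the Docker SDK for Python, with the image name and port mapping as hypothetical placeholders:

    import docker

    # Connects to whatever Docker daemon the environment points at:
    # a notebook, an on-premises server, or a cloud VM.
    docker_client = docker.from_env()

    # Run the same (placeholder) image that was prototyped locally.
    container = docker_client.containers.run(
        "registry.example.com/demo-app:1.0",
        detach=True,
        ports={"8080/tcp": 8080},
    )
    print(container.short_id, container.status)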

The composable design of containerized applications makes running canary groups for application updates easier, and makes staging phased rollouts to production a straightforward matter of scripting and/or orchestration. These capabilities are already largely baked into Kubernetes.
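As one sketch of what that scripting can look like, the snippet below shifts replicas between a hypothetical stable Deployment and a canary Deployment sitting behind the same Service, using the Kubernetes Python client; the names, namespace, and step ratios are assumptions:

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    def set_replicas(name: str, replicas: int, namespace: str = "default") -> None:
        # Patch only the replica count of an existing Deployment.
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    # Gradually move traffic-bearing replicas from "web-stable" to "web-canary";
    # both Deployments are assumed to sit behind one Service that load-balances
    # across their pods.
    for stable, canary in [(9, 1), (7, 3), (5, 5), (0, 10)]:
        set_replicas("web-stable", stable)
        set_replicas("web-canary", canary)
        # In practice, health checks and metrics gate each step before proceeding.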

That Network Thing

The challenge with containers is that, fundamentally, they’re not all that different from any other way to run a workload. Like virtual machines and bare metal servers, containers rely on networking to allow workloads to talk to the rest of the world. Workloads which don’t talk to the rest of the world have rather limited usefulness.

The ability to get deep network visibility into containers is vital. Network visibility allows operations teams to ensure that workloads are operating as expected and meeting performance requirements. It is also a big part of how security teams detect unauthorized access and other security compromise events.

Unfortunately, network visibility into containers is something of a challenge. To start with, the low overhead of containers means that many more of them can be packed onto a server than is generally possible with VMs. The ability of containers to run in multiple environments – inside VMs, in the public cloud, or on bare metal – also makes keeping an eye on them tricky, because the number of layers between the workload and the network monitoring tools varies.

To top it all off, containers move. Because Kubernetes can run anywhere, containers in the real world tend to end up everywhere. Containers are also driving the sharding of applications: large monolithic applications are being broken down into microservices, and those microservices might be scattered across multiple infrastructures. As a simple example, a database that lives on-premises might feed web servers on multiple public cloud providers, all sitting behind load balancers provided by third-party service providers.
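One way to keep a picture of where everything currently lives is to subscribe to pod lifecycle events. The sketch below, using the Kubernetes Python client, tracks pod IPs as containers appear, move, and disappear; the in-memory inventory is a stand-in for whatever monitoring or visibility tooling is actually in use:

    from kubernetes import client, config, watch

    config.load_kube_config()
    core = client.CoreV1Api()

    # (namespace, pod name) -> current pod IP; a stand-in for a real
    # monitoring or visibility system's inventory.
    inventory = {}

    watcher = watch.Watch()
    for event in watcher.stream(core.list_pod_for_all_namespaces):
        pod = event["object"]
        key = (pod.metadata.namespace, pod.metadata.name)
        if event["type"] == "DELETED":
            inventory.pop(key, None)
        elif pod.status.pod_ip:
            inventory[key] = pod.status.pod_ip
        # Each change would be pushed to the monitoring/visibility tooling here.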

Obtaining and maintaining useful network visibility requires being able to peer into traffic that transits physical networks, virtual networks, service provider networks, and public cloud provider networks. These networks need to reconfigure themselves dynamically to adapt to perpetually moving and changing containers. They also need to ensure that all the containers which work together to form a single service can communicate, while ensuring that nothing that shouldn’t be communicating with those containers is able to do so.
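Inside a Kubernetes cluster, part of that "allow what should talk, block everything else" requirement can be expressed as a NetworkPolicy and applied programmatically. The sketch below, in which the labels, namespace, and port are illustrative assumptions, permits only web pods to reach database pods:

    from kubernetes import client, config

    config.load_kube_config()
    net = client.NetworkingV1Api()

    # Pods labelled app=db accept ingress only from pods labelled app=web
    # on port 5432; all other ingress to them is dropped.
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="db-allow-web-only"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "db"}),
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
                    )],
                    ports=[client.V1NetworkPolicyPort(port=5432)],
                )
            ],
        ),
    )

    net.create_namespaced_network_policy(namespace="default", body=policy)

That only covers the in-cluster piece; carrying the same intent across physical, virtual, service provider, and public cloud networks is exactly the harder problem described above.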

When operations teams had complete control of infrastructure, provisioning workloads and IT resources was slow and bureaucratic, precisely because keeping track of all of these details isn’t easy. Now that containers have empowered developers, network automation has moved from a nice-to-have to a must-have.

There is no going back. The genie doesn’t go back into the bottle. Containers are here to stay, and the reality of the world is that developers own them. Developers are hired to experiment, and to try new things, and that is exactly what they are going to continue to do. Keeping up will require software-defined networking (SDN).