Continuous Integration and Continuous Delivery (CI/CD) and containers are both at the heart of modern software development. CI/CD developers regularly break applications up into microservices, each running in its own container. Individual microservices can be updated independently of one another, and CI/CD developers aim to make those updates frequently.

This approach to application development has serious implications for networking.

There are a lot of things to consider when talking about the networking implications of CI/CD, containers, microservices, and other modern approaches to application development. For starters, containers offer greater density than virtual machines (VMs); you can fit more containers into a given server than VMs.

Meanwhile, containers have networking requirements just as VMs do, so more workloads per server means more networking resources per server: more MAC addresses, IP addresses, DNS entries, load balancers, monitoring, intrusion detection, and so forth. Network plumbing hasn’t changed, so more workloads means more plumbing to instantiate and keep track of.

Containers can live inside a VM or on a physical server. This means they may have networking requirements that differ from those of traditional VMs and other workloads (a container may only talk to other containers within the same VM, for example), even as they retain the same basic networking requirements that VMs have.

Containers themselves don’t live migrate between servers, but the VMs they live on might, and that can present problems, such as tracking the MAC addresses and IPs of multiple containers inside a single VM as that VM moves between physical hosts. Containers can also be destroyed and recreated by the thousands, posing new challenges.

In some cases, it’s important to associate a given IP address or MAC address with a specific data set, even when the container (and thus the application operating on that data) is destroyed and recreated elsewhere. Containers are also far more likely than their VM predecessors to be built from configuration files using an Infrastructure as Code (IaC) approach.
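How might such an association survive that churn? In Kubernetes, for instance, one answer is a Service: a stable virtual IP and DNS name that fronts whichever containers currently match a label, no matter how many times they are destroyed and recreated. A minimal sketch, with hypothetical names:

```yaml
# A minimal sketch: a Kubernetes Service gives ephemeral containers a
# stable IP and DNS name. The "orders-db" name and labels are
# hypothetical; the pattern is what matters.
apiVersion: v1
kind: Service
metadata:
  name: orders-db       # stable DNS entry: orders-db.default.svc.cluster.local
spec:
  selector:
    app: orders-db      # matches whichever pods currently carry this label
  ports:
    - port: 5432        # clients always connect to this stable address and port...
      targetPort: 5432  # ...regardless of which pod instance answers
```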

Infrastructure as Code

Getting all of this right demands automation and orchestration. Humans are prone to error, and that’s before factoring in the dramatic increase in both the number of workloads and the frequency of change once an organization has adopted modern development approaches. A CI/CD developer regularly updating their application, which involves destroying and recreating multiple containers, can dramatically increase the frequency of change for network administrators.

Today, automation and orchestration of IT infrastructure increasingly fall under the heading of IaC. Kubernetes, Terraform, and many other IaC tools read declarative configuration files (YAML, in Kubernetes’ case). Such a file can contain all the details about a workload and elements of the underlying infrastructure, from the configuration of the individual application all the way down to the physical network.
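A minimal sketch of what such a file might look like for Kubernetes (the names, image, and values here are hypothetical):

```yaml
# One hypothetical YAML manifest declaring both the workload itself
# (image, replica count) and its networking details (the port the
# network must plumb to).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront
spec:
  replicas: 3
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: storefront
          image: registry.example.com/storefront:1.4.2  # each CI/CD run can bump this tag
          ports:
            - containerPort: 8080                       # the port the network must reach
```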

This assumes, of course, that all the various infrastructure elements support automation. You can’t, for example, register a workload’s new address with the firewall if the IaC tool in use can’t talk to the firewall.

Dynamic behavior like this, though, inevitably leads to complexity. As soon as it’s possible to automate and orchestrate the entire lifecycle of workloads, we stop caring about where those workloads are placed. Instead of placing all workloads that need to share secure backend communications on the same host, we might allow those workloads to be spread across multiple hosts or even multiple clusters.
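One way to express that kind of rule declaratively is a Kubernetes NetworkPolicy, which is written in terms of labels rather than hosts, so it holds wherever the scheduler places the workloads. A hedged sketch, with hypothetical names:

```yaml
# A sketch: restrict backend traffic to the api tier by label, not by
# host, so the rule survives any placement decision. Labels and port
# are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-from-api-only
spec:
  podSelector:
    matchLabels:
      tier: backend        # applies to every backend pod, wherever it runs
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: api    # only api pods may connect...
      ports:
        - protocol: TCP
          port: 5432       # ...and only on this port
```

Note that a policy like this is only enforced if the cluster’s network plugin supports it, which loops back to the point above: every element in the path has to be automatable.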

Increasingly, organizations rely on workload schedulers to determine workload placement, perhaps constraining that placement in some way (such as grouping the workloads that form a single service) and perhaps not. It’s common, for example, to run some of a service’s workloads on multiple public clouds and others in on-premises data centers.
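In Kubernetes, for example, that kind of placement constraint is itself declared in the workload’s YAML. A sketch, with hypothetical names, asking the scheduler to co-locate a cache with the api pods it serves:

```yaml
# A hypothetical constraint: pod affinity asks the scheduler to place
# these cache pods on the same node as the "api" pods they serve.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-cache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-cache
  template:
    metadata:
      labels:
        app: api-cache
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: api                          # group with the api pods...
              topologyKey: kubernetes.io/hostname   # ...on the same host
      containers:
        - name: cache
          image: redis:7
```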

Ensuring secure communication between workloads spread this way requires complex networking. They may be united through VPNs, layer 2 tunnels, gateways, proxies, and more; the options today are seemingly limitless. No organization can afford to pay network administrators to set up and tear down these connections by hand every time a microservice is updated and reinstantiated, or a workload is added or moved.

Software-defined infrastructure, which by definition includes networking, is no longer a nice-to-have. It’s an absolute must for organizations that want to provide infrastructure effectively for applications built using modern development approaches such as CI/CD, containers, and microservices. And as the bit that connects all the other bits, the physical network is the place to start on this journey.