There’s been a lot of talk about container networking in the industry lately (heck, we can’t even stop talking about it). And it’s for a good reason. Containers offer a fantastic way to develop and manage microservices and distributed applications easily and efficiently. In fact, that’s one of the reasons we launched Host Pack — to make container networking even simpler. Between Host Pack and NetQ, you can get fabric-wide connectivity and visibility from server to switch.

There are a variety of ways you can deploy a container network using Host Pack and Cumulus Linux, and we have documented some of them in several Validated Design Guides discussed below. Wondering which deployment method is right for your business? This blog post is for you.

Docker Swarm with Host Pack

Overview: The Docker Swarm with Host Pack solution uses the connectivity module within Host Pack: FRRouting (FRR) running in a container. The FRR container runs on the servers and uses BGP unnumbered for Layer 3 connectivity, enabling the hosts to participate in the routing fabric. We use Docker Swarm as the container orchestration tool for simplicity.

Choose this deployment if:

  • You’re looking for the easiest and simplest container deployment possible.
  • You don’t mind using overlays and NAT.
  • You are able to install software on the hosts and have at least two leaf (top of rack) switches connected to each host for redundancy.
  • You are very comfortable with Layer 3 and realize the deficiencies of MLAG, STP and large failure domains.

How it works:

When configured, Swarm builds VXLAN tunnels between the hosts for multi-host inter-container communication, shown by the red dotted line in the diagram below. We set up the host's loopback address as the VTEP and advertise it directly into the routing domain from the host via eBGP unnumbered, giving the containers Layer 3 redundancy toward the network and toward each other. Traffic to the outside world uses NAT, as denoted by the yellow dotted line, while traffic between containers on different hosts rides the VXLAN tunnels.
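
As a rough sketch of the host side (the ASN, interface names and addresses here are illustrative, not taken from the guide), the FRR container's BGP unnumbered configuration and the Swarm bootstrap might look like this:

    ! /etc/frr/frr.conf inside the Host Pack FRR container
    router bgp 65011
     bgp router-id 10.0.0.11
     ! BGP unnumbered toward both leaf switches
     neighbor eth1 interface remote-as external
     neighbor eth2 interface remote-as external
     address-family ipv4 unicast
      ! advertise the loopback, which Swarm uses as the VXLAN VTEP
      network 10.0.0.11/32

    # Bootstrap Swarm with the loopback as the advertised address
    docker swarm init --advertise-addr 10.0.0.11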

More information on this solution can be found in the full validated design guide.

[Diagram: Docker Swarm with Host Pack container network topology]

Docker Swarm with MLAG or single attached hosts

Overview: The Docker Swarm with MLAG or single attached hosts solution terminates Layer 3 at the leaf (top of rack) switches and runs Layer 2 to the hosts. MLAG can be deployed from the hosts to the leaf switches, or the hosts can be single attached. We use Docker Swarm as the orchestration tool for simplicity.

Choose this deployment if:

  • You’re looking for the easiest and simplest container deployment possible.
  • You don’t mind using overlays and NAT.
  • You are NOT able to install additional containers (FRR) on the hosts.
  • You have 2 Top of Rack (ToR) switches running MLAG for redundancy or have single attached hosts.

How it works:

Docker Swarm enables VXLAN tunnels between the hosts for multi-host inter-container communication. We set up the VXLAN VTEP as the IP address of the host's single attached Ethernet interface, or of the host's bond in the case of MLAG. The ToR(s) advertise the IP subnet of the host-facing bond or Ethernet interface (which contains the VTEP address) directly into the routing domain via eBGP.
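
For illustration (the addresses and names below are ours, not from the guide), the Swarm and ToR sides might look like this, with the host's bond or Ethernet IP doubling as the VTEP:

    # On the Swarm manager; 172.16.1.11 is the host's bond (or single
    # attached Ethernet) address, which also serves as the VXLAN VTEP
    docker swarm init --advertise-addr 172.16.1.11

    # Create an overlay network for multi-host inter-container traffic
    docker network create -d overlay --attachable app-net
    docker service create --name web --network app-net --replicas 4 nginx

    # On each ToR (Cumulus Linux NCLU): advertise the host-facing subnet
    net add bgp network 172.16.1.0/24
    net commit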

More information on this solution can be found in the full validated design guide.

[Diagram: Docker Swarm with MLAG or single attached hosts topology]

Host Pack: Advertising the Docker Bridge

Overview: The FRR container, the connectivity module within Host Pack, advertises the Docker Bridge subnet into the routing domain. This deployment option uses an FRR container on the servers with Layer 3 connectivity and eBGP unnumbered, enabling the hosts to participate in the routing fabric. It is best deployed with dual attached hosts to enable Layer 3 redundancy. We advertise the Docker Bridge subnet directly into the routing domain without any NAT or overlays. This solution does not dictate a specific orchestration tool.

Choose this deployment if:

  • You want a container networking system that operates independently of the orchestration tool.
  • You do not have tight constraints on IP address usage for containers or you are using private addresses.
  • You prefer to avoid using overlays and NAT for higher performance and ease of troubleshooting.
  • Your containerized applications support Layer 3 connectivity.
  • You are able to install software (FRR routing) on the hosts to avoid difficulties with MLAG and STP and prefer smaller failure domains.
  • You have 2 or more Top of Rack (ToR) switches.

How it works:

In this solution, we use a customer configured IP subnet (with an appropriate mask size – depending on planned number of containers per host) for the Docker Bridge. We dictate a private or public IP subnet when setting up the Bridge. Each host’s Docker Bridge must be configured with a different subnet.
We then use FRR within Host Pack to advertise that Bridge subnet into the routing domain for connectivity from either the outside or containers on other hosts.
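
As a hedged example (the subnets and ASN are invented for illustration), host1 might pin its Docker Bridge to a unique subnet through the Docker daemon's bip setting and then advertise that subnet with FRR:

    # /etc/docker/daemon.json on host1 (each host needs a unique subnet)
    {
      "bip": "10.1.1.1/24"
    }

    ! /etc/frr/frr.conf in the Host Pack FRR container on host1
    router bgp 65011
     neighbor eth1 interface remote-as external
     neighbor eth2 interface remote-as external
     address-family ipv4 unicast
      ! advertise this host's Docker Bridge subnet into the fabric
      network 10.1.1.0/24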

More information on this solution can be found in the full validated design guide.

[Diagram: Advertising the Docker Bridge topology]

Advertise Container Addresses into the Routing Domain with Host Pack Features

Overview: The Advertise Container Addresses into the Routing Domain with Host Pack solution deploys Host Pack's FRR container and the Cumulus Container Advertiser on the hosts. It uses eBGP for Layer 3 connectivity to the hosts and is best deployed with dual attached hosts to enable Layer 3 redundancy. We advertise the containers' /32 IP addresses directly into the routing domain without any NAT or overlays. This solution does not dictate a specific orchestration tool; however, a centralized IPAM must be used to assign the containers' IP addresses.

Choose this deployment if:

  • You know which orchestration tool you like and you can integrate it with a centralized IPAM.
  • You have limited IP addresses for containers and need to conserve them (Anycast IPs, Public IPs, Constrained internal IP address space for container deployment).
  • You prefer to avoid using overlays and NAT for higher performance and ease of troubleshooting.
  • Your containerized applications support Layer 3 connectivity between themselves.
  • You are able to install additional containers (FRR and the Cumulus Container Advertiser) on the hosts to avoid difficulties with MLAG and STP and you prefer smaller failure domains.
  • You have 2 or more Top of Rack (ToR) switches.
  • You can summarize IP addresses at the edge.

How it works:

We deploy Host Pack's FRR container with eBGP unnumbered for redundancy, along with Host Pack's Container Advertiser, which advertises each container's /32 IP address into the routing domain. We use the same configured IP subnet on all hosts in the data center and rely on proxy ARP for multi-host container-to-container reachability. This conserves IP address space and allows a container to be destroyed and redeployed on a different host with the same IP address, without requiring a different subnet per host.
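
A sketch of the host-side plumbing (the values are illustrative, and the Container Advertiser's own configuration is covered in the design guide; we assume here that the advertiser installs each local container's /32 as a kernel route):

    # Enable proxy ARP on the container-facing interface (name illustrative)
    sysctl -w net.ipv4.conf.docker0.proxy_arp=1

    ! /etc/frr/frr.conf in the Host Pack FRR container
    router bgp 65011
     neighbor eth1 interface remote-as external
     neighbor eth2 interface remote-as external
     address-family ipv4 unicast
      ! assumption: container /32s appear as kernel routes on the host
      redistribute kernel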

This solution requires a centralized IPAM that can work with the orchestration tool to ensure container IP addresses are not duplicated in the network.

More information on this solution can be found in the full validated design guide.

[Diagram: Advertising container /32 addresses with Host Pack topology]

Advertise Containers' /32 Addresses with Redistribute Neighbor

Overview: In this solution, redistribute neighbor runs on the leaf switches and advertises the containers' /32 IP addresses directly into the routing domain via eBGP unnumbered. No overlays or NAT are used, and no extra containers are needed on the hosts. This solution does not dictate a specific orchestration tool; however, a centralized IPAM must be used to assign the containers' IP addresses.

Choose this deployment if:

  • You know which container orchestration tool you like and you can integrate it with a centralized IPAM.
  • You have limited IP addresses for containers and need to conserve them (Anycast IPs, Public IPs, Constrained internal IP address space for container deployment).
  • You prefer to avoid using overlays and NAT for higher performance and ease of troubleshooting.
  • Your containerized applications support Layer 3 connectivity (containers can be on different subnets).
  • You develop your own application containers and/or are able to get the container to GARP or ping upon spin up.
  • You are NOT able to install additional containers on the hosts.
  • You have single attached hosts.
  • You can summarize IP addresses at the edge.

How it works:

In this solution, we deploy the macvlan driver on each host, using the same IP subnet on all hosts. On the leaf switch, redistribute neighbor learns the container IP addresses from the switch's ARP table and advertises them as /32s into the routing domain. This means the containers need to GARP or ping when they come up in order to announce themselves. Proxy ARP handles communication between containers on different hosts.
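
For illustration (the subnet, interface and names are ours), the host side uses the macvlan driver with the shared subnet, while the leaf redistributes the /32s learned from ARP into BGP:

    # On every host: macvlan network with the same shared subnet
    docker network create -d macvlan \
      --subnet 10.2.0.0/16 --gateway 10.2.0.1 \
      -o parent=eth1 shared-net
    docker run -d --name app1 --network shared-net --ip 10.2.1.21 nginx

    ! /etc/frr/frr.conf on the leaf (Cumulus Linux, rdnbrd enabled)
    router bgp 65201
     address-family ipv4 unicast
      ! rdnbrd places ARP-learned /32s into kernel table 10
      redistribute table 10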

This solution allows containers to be destroyed and redeployed on a different host with the same IP address, and it conserves public IP address space. The macvlan driver also performs better than the Docker Bridge, but it only supports single attached hosts.

More information on this solution can be found in the full validated design guide.

[Diagram: Redistribute neighbor topology]

Conclusion

No matter how you’re deploying your container network, you can get robust connectivity (and visibility for that matter) with one of the above solutions and Host Pack. If you’re still not sure which solution is right for you, you can test out the technology in your own personal, pre-built data center with Cumulus in the Cloud. You can plug and play with common configurations and build out a virtual container network easily and for free. Check it out.