Part 1 – Examining the existing network

In my previous post, I was playing around with Cumulus in the Cloud (CITC) and how it integrates with OpenStack. Now that I had OpenStack running in CITC, I wanted to dive deeper into the networking-specific technology.

In this blog post I will discuss how I leveraged a flat network to create simple instance deployments initially. Then I’ll dive more deeply into how I created a VXLAN network for my OpenStack instances, to enable more scalable east-west communication. In the previous post I used the CITC console as my primary interface for configuration. This time I will be using an SSH client and the direct SSH information, as the outputs I’m gathering are wide and much easier to capture with a dedicated SSH client.

To do so, I just clicked the SSH access button on the right-hand side of the GUI. This provided me with the username, password and IP address that let me use my own SSH client to connect to the CITC infrastructure.

For the uninitiated, here is a great intro doc to OpenStack networking. In addition, my colleague Eric Pulvino pointed me toward this awesome OpenStack network deployment guide, which provides greater insight into OpenStack packet forwarding.

To start off, let’s define the difference between a flat and a VXLAN network, using the explanation in the Intro to OpenStack Networking:

  • Flat: All instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation takes place.
  • VXLAN: VXLAN is an encapsulation protocol that creates overlay networks to activate and control communication between compute instances. A networking router is required to allow traffic to flow outside of the VXLAN tenant network. The router provides the ability to connect to instances directly from an external network using floating IP addresses.

Now, getting back to where I left off in my previous blog post: we launched an instance named cirros01 and assigned it to a network called provider. In case you forgot, here is the command we used:

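A sketch of that command, assuming image and flavor names of cirros and m1.tiny (both assumptions on my part):

    openstack server create --flavor m1.tiny --image cirros \
      --network provider cirros01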

This instance was accessible directly from all the servers and had internet access. To start, I logged into my instance:

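Something along these lines, assuming SSH reachability from the host and the default CirrOS login:

    ssh cirros@192.168.0.104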
Then I checked the routing table:

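On the CirrOS guest that would be roughly:

    ip route        # route -n shows the same table in the older format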
And the IP addresses assigned to my interfaces:

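And for the interface addressing, something like:

    ip addr show    # ifconfig -a gives a similar view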

I can see that the IP address 192.168.0.104 is assigned to my instance on interface eth0.

I wanted to learn more about the network assigned to my instance, and to do that I needed to use the neutron microservice.

I listed all the networks currently available in my OpenStack environment:

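Using the neutron CLI, roughly (openstack network list is the newer equivalent):

    neutron net-list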
I wanted to dump more information regarding this provider network:

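Something along the lines of:

    neutron net-show provider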

Here’s a special note: to gather more information regarding this provider network, look at it with admin-openrc sourced:

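In other words, roughly the following, assuming the admin-openrc credentials file sits in the working directory:

    source admin-openrc
    neutron net-show provider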
Notice that there is a new field in this output called provider:network_type, and that it is labeled as flat. I’ll come back to this after looking at the subnet information.

Being a network engineer, I wanted to see the IP addressing assigned to this network. Again leveraging neutron, I pulled the subnet information:

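Something like the two commands below; the subnet name provider is an assumption, and the UUID from the list output works just as well:

    neutron subnet-list
    neutron subnet-show provider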
There are a few things we can pick out here. The subnet assigned to the provider network is 192.168.0.0/24, with an allocation range of 192.168.0.100 to 192.168.0.150. Gateway and DNS information is also defined for the subnet. All of this aligns with the routing table and IP address information we looked at on the instance earlier.

Now, back to that network type. We saw earlier that the provider network is defined as a flat network. Let’s dive into this in a bit more detail. When a flat network is created on a compute node, the compute node absorbs the external-facing NIC into a bridge, then assigns the IP address to the bridge instead of the NIC. In the case of the CITC OpenStack demo, the management IP address of 192.168.0.31, which would normally have been assigned to eth0, has been reassigned to the brq Linux bridge interface. Any instance deployed using the provider network is now bridged through this brq interface, so the VM sits in the same flat L2 network as the out-of-band management network of 192.168.0.0/24.

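A quick way to see this on the host is to look at the bridge membership and addressing, for example:

    brctl show      # eth0 shows up as a member of a brq<network-id> bridge
    ip addr show    # the 192.168.0.x address sits on the brq bridge rather than eth0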

In the diagram below, this means that the instance is using the blue connection on the side of the server instead of the grey connections used for the data plane.

[Diagram: the instance’s traffic uses the blue out-of-band management connection rather than the grey data plane connections]
Having looked at all this output via the CLI, I wanted to cross-reference it against the Horizon console. To do that, I first accessed the CITC webpage and launched the Horizon GUI:

[Screenshot: launching the Horizon GUI from the CITC console]

From there, I logged in using the default domain with demo/demo as the username/password, then accessed the networking tab in my Horizon console:

[Screenshot: the Network Topology tab in the Horizon console]

This showed me the cirros01 instance I deployed connected to the provider network. Notice how this VM is directly attached, as we would expect, since the provider network is a flat network.

[Screenshot: cirros01 attached directly to the provider network in the Horizon topology view]
In the next part of this blog, I start investigating the creation of a new VXLAN network.

NOTE: The outputs for this blog were gathered across multiple days and simulations in CITC, so some of the interface naming conventions may not line up between blog posts. The fundamental commands and outputs are correct, but the system generated naming conventions may not align.

Part 2 – Creating a new network

My next step was taking a crack at creating a new VXLAN network within neutron. This new network would be used for VMs to operate in a more traditional sense, with tenant isolation. For the following steps I used the neutron microservice; the equivalent openstack network commands behave the same way:

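The network creation itself is a one-liner, something like:

    neutron net-create demonet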
I then added a subnet to this demonet I created:

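A sketch of the subnet creation, assuming the 200.0.0.0/24 range that shows up later in the outputs (the demosubnet name is my own label):

    neutron subnet-create demonet 200.0.0.0/24 --name demosubnet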
Notice that since I didn’t set an allocation pool for this subnet, neutron automatically made every IP address in the subnet, apart from the gateway, available for allocation.

Next, I created a demorouter that would act as the gateway for my network to get out to the world.

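Creating the router is another one-liner, roughly:

    neutron router-create demorouter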
Then I added my demonet as an inside interface on the router and set the provider network as its gateway:

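Something along these lines, attaching the subnet created above as the inside interface and the provider network as the external gateway:

    neutron router-interface-add demorouter demosubnet
    neutron router-gateway-set demorouter provider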
We can now query the router to verify everything was configured correctly:

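For example, with either CLI:

    neutron router-show demorouter    # or: openstack router show demorouter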
We can see that NAT is enabled, and the router has two interfaces:

  • One on the newly created private VXLAN network
  • One on the previously created flat network

To verify the network, let’s switch back to sourcing from admin-openrc and check the network configuration.

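Roughly the following:

    source admin-openrc
    openstack network show demonet    # or: neutron net-show demonet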

Notice how this demonet is of network_type vxlan, whereas our previous provider network was of network_type flat. This means that any instances connected to this demonet will be able to communicate with each other through VXLAN. This is how OpenStack ensures that VMs running on multiple different compute nodes can still be within the same subnet: any east-west traffic is encapsulated in VXLAN, and the physical network only ever sees the encapsulated traffic. The encapsulation itself happens at the VXLAN tunnel endpoints on the compute nodes; the demorouter we created comes into play when traffic needs to leave the tenant network.
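If you want to see those tunnel endpoints for yourself, each compute node carries a vxlan interface hanging off the tenant bridge; a quick check would look something like:

    ip -d link show type vxlan    # shows the VNI and the local VTEP address
    brctl show                    # the vxlan interface is a member of the brq bridge for demonet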

At this point I’m ready to launch a new VM that’s using this new demonet I created:

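A sketch of the launch command, reusing the same assumed image and flavor names as before:

    openstack server create --flavor m1.tiny --image cirros \
      --network demonet demoinstance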

We know from the above that the VM is connected to a private network called demonet, and that east-west traffic will be encapsulated in VXLAN. But what about routing outside its subnet? For that, I needed to give my new demoinstance access to my provider network. To do this, I first had to pull a floating IP address from the provider network:

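Something like:

    openstack floating ip create provider    # neutron floatingip-create provider is the older form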
Then assign that floating IP address to the instance:

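Roughly, using the address handed out in the previous step:

    openstack server add floating ip demoinstance 192.168.0.106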
Once we have that, we can check to see that our floating IP mapped to our demoinstance properly. First, check the floating IP allocation:

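For example:

    openstack floating ip list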

Here we see that the floating IP we pulled from the provider network of 192.168.0.106 is mapped to the fixed IP address of 200.0.0.4. But who owns 200.0.0.4? We can query our demoinstance to verify that it is the actual owner:

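Something along the lines of:

    openstack server show demoinstance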

I can now go into the Horizon console and take a look at a visual representation of all my CLI work. Using the same steps as before, I can see the following network topology:

[Screenshot: the Horizon network topology showing cirros01 on the provider network and demoinstance on demonet behind demorouter]
As you can see, my original cirros01 instance is there, directly connected to my provider network. My new demoinstance is connected to the private demonet I created, and reaches the external world through the demorouter I also created.

Mind you, all of this was possible through the Horizon console, but as I said at the start, I wanted to get more familiar with the OpenStack command line.

If you’re ready to start playing around with Cumulus in the Cloud for yourself, click here to get your personal, pre-built virtual data center up and running!