Many networking solutions promise great Opex savings through automation, simulation, and continuous integration. Similarly, there is a school of thought in network design where a single point in the network performs multiple roles. The goal is to shave the initial Capex cost of purchasing additional switches by overlapping features on that single device.

Let’s take the simplest example. We have a 3-rack environment with dual leafs per rack and 2 spines for inter-rack connectivity. In this design, we are leveraging VXLAN as the data plane overlay with BGP/EVPN as the control plane. Additionally, all 3 racks are compute, leaving no additional leafs to act as the service/border/exit leafs.

A network designer will look at the infrastructure and try to overlap features by repurposing the spines as exit leafs. Why would they think this way, you ask? Well, this is only an 8-switch design. Spending money on an additional 2 switches to act as dedicated border leafs uplifts my Capex cost by 25 percent! I would then be required to buy 10 switches in total instead of 8.
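As a quick sanity check on that percentage, here is the switch-count arithmetic as a minimal sketch (pure counting, no vendor-specific assumptions):

```python
# Back-of-the-napkin fabric sizing for the 3-rack example.
racks = 3
leafs_per_rack = 2
spines = 2

base_fabric = racks * leafs_per_rack + spines       # 8 switches
border_leafs = 2                                    # the pair being "saved"
with_border = base_fabric + border_leafs            # 10 switches

uplift = border_leafs / base_fabric
print(f"Base fabric: {base_fabric} switches")
print(f"With dedicated border leafs: {with_border} switches")
print(f"Capex uplift: {uplift:.0%}")                # 25%
```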

So instead, we end up overlaying VXLAN onto the spines. The spines now act not only as the interconnect between the leafs for rack-to-rack L2 extension, but also as border leafs, terminating VXLAN tunnels for the outside world. While on paper this sounds like a reasonable solution, it adds an additional layer of operational complexity. Any time a change is made to the VXLAN overlay, configuration changes are now required on the spines, which is an added risk to the overall stability of the infrastructure.
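To make that blast radius concrete, here is a toy model of which devices see configuration changes when the overlay changes. The device names and role labels are invented for illustration, not vendor configuration; only rack 1's leafs are shown, since racks 2 and 3 are identical:

```python
# Toy "blast radius" model: which devices carry VTEP state, and therefore
# get touched whenever the overlay changes (e.g. a new VNI is added).
DEDICATED = {
    "spine1": {"underlay"},           "spine2": {"underlay"},
    "leaf1a": {"underlay", "vtep"},   "leaf1b": {"underlay", "vtep"},
    "border1": {"underlay", "vtep"},  "border2": {"underlay", "vtep"},
}
COLLAPSED = {
    "spine1": {"underlay", "vtep"},   "spine2": {"underlay", "vtep"},
    "leaf1a": {"underlay", "vtep"},   "leaf1b": {"underlay", "vtep"},
}

def overlay_change_touches(fabric):
    """Devices that terminate VXLAN and so get reconfigured on overlay changes."""
    return sorted(name for name, roles in fabric.items() if "vtep" in roles)

print("Dedicated design:", overlay_change_touches(DEDICATED))
print("Collapsed design:", overlay_change_touches(COLLAPSED))
# In the collapsed design, the spines show up in the list: every overlay
# change now lands on the devices the entire fabric depends on.
```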

In a traditional spine/leaf architecture, the spines are architected to be pure L3 devices: they only see IP-routed packets and are deliberately "dumb", performing nothing but layer 3 routing. In this modified design, the spines are pulling double duty. We're layering on more features and more complexity, which means more chance of interop issues or failure, and ultimately more fabric-wide risk in the overall design.

The above risk doesn't even account for the fact that spines typically support a different feature set than leafs because of the hardware ASICs traditionally positioned for spine switches. Spine ASICs are normally optimized for raw throughput, which comes at the cost of supported features, whereas leaf ASICs are designed to be feature-rich and support server connectivity, which comes at the cost of aggregate throughput. Even taking the hardware differences between leaf and spine devices off the table, collapsing the roles still creates a fundamental interoperability risk.

Now let's do some math on the whole situation. The alternative to collapsing features onto the spine is to use dedicated exit leafs. The list price of an open networking Trident3 switch usually runs around $16,000 USD. Financially, that means you're saving $32,000 USD by not purchasing the additional pair of exit leafs. So here's the real question: over the lifetime of the solution, do you actually come out $32,000 USD ahead by collapsing a sustainable, supportable solution into one that's harder to operate and manage?

A little aside: keep in mind that savings in Capex can be offset by the complexities introduced into Opex. In our suboptimal design, we reuse the spines as exits. If we're talking Broadcom chipsets, spine switches tend to use Tomahawk ASICs, which are considered "speeds and feeds" hardware. Older switch hardware with Tomahawk ASICs supported VXLAN termination using internal loopback ports; newer Tomahawk-based switch hardware doesn't support VXLAN termination at all, which may make this entire option infeasible. As a result, we're already forcing features onto a box that doesn't want them, shifting Capex savings into Opex costs.

Now back to the math. Assuming the lifetime of the data center is 3 years (a conservative estimate), that works out to a meager ~$10,700 USD per year. According to Indeed.com, the average salary of a network engineer is $90,000 USD, and $140,000 USD for a CCIE. For this example, let's be conservative and use only the average network engineer salary.

At $90,000 USD across approximately 240 eight-hour workdays a year, that is approximately $46.88 USD/hour. If the list price of a pair of border leafs is $32,000 USD, then you are saving at most about 680 hours of engineering time, essentially 85 workdays. Does collapsing the network device list really equate to 680 hours of outages, troubleshooting, or problem isolation over the course of 3 years? That is about 28 days a year, or roughly 2.4 days a month.
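For anyone who wants to check the arithmetic, here is the same break-even calculation as a short script, using only the figures quoted above:

```python
# Break-even math for the $32,000 USD "saved" by skipping border leafs.
switch_list_price = 16_000                     # USD, Trident3 list price
pair_of_border_leafs = 2 * switch_list_price   # $32,000 USD

lifetime_years = 3
per_year_savings = pair_of_border_leafs / lifetime_years  # ~$10,667/year

salary = 90_000                                # USD/year, avg network engineer
workdays, hours_per_day = 240, 8
hourly_rate = salary / (workdays * hours_per_day)         # ~$46.88/hour

breakeven_hours = pair_of_border_leafs / hourly_rate      # ~683 hours
breakeven_days = breakeven_hours / hours_per_day          # ~85 workdays

print(f"Hourly rate:  ${hourly_rate:.2f}")
print(f"Break-even:   {breakeven_hours:.0f} hours "
      f"({breakeven_days:.0f} workdays over {lifetime_years} years)")
print(f"Per year:     {breakeven_days / lifetime_years:.1f} workdays")
print(f"Per month:    {breakeven_days / lifetime_years / 12:.1f} workdays")
```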

Long story short, if it were my network, I'd always have dedicated devices for each role rather than trying to collapse functionality. For example, this architecture is my preference:

I prefer this since I fundamentally do not believe in shifting Capex savings into Opex costs. In my personal experience, the time saved in problem isolation alone makes the upfront Capex investment much more worthwhile.
