I am continually amazed at the productivity of the Cumulus Networks development team, who recently collaborated with the Mellanox development team to do some amazing things in this new Cumulus Linux 3.1 release. Besides innovating on a number of important new software features, they added support for five new switches from Mellanox, including the first native 25 Gigabit Open Ethernet switch as well as the highest-capacity 10/100GbE switch on the market.

The Mellanox SN2410 is the industry’s first generally available switch with native 25 Gigabit Ethernet (25GbE) ports. Working for *the* provider of 99% of all 25GbE NICs sold worldwide, I can say with confidence that, until now, all 25GbE servers have been connected to 100GbE switch ports via breakout cables. The SN2410 changes all that by providing 48 SFP28 ports that can natively operate in 1G, 10G, or 25G modes, which is great for cutting-edge deployments while providing backward compatibility for legacy devices. Just like 10GbE SFP+ ports, the 25GbE SFP28 ports can use inexpensive passive copper direct-attach cables.
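
For the curious, pinning one of those SFP28 ports to a given speed is a small ifupdown2 stanza in Cumulus Linux. A minimal sketch, assuming port swp1 and the link-speed attribute from Cumulus Linux 3.x (values are in Mbps):

    # /etc/network/interfaces -- run swp1 at 25G; use 10000 or 1000 for legacy gear
    auto swp1
    iface swp1
        link-speed 25000
        link-duplex full

    # Apply the change without a reboot:
    sudo ifreload -a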

Who needs 25 Gigabit Ethernet?

Without naming names, you can safely assume that the hyperscalers and other early adopters of 10GbE are now moving to 25GbE to avoid the cost and complexity of bonding multiple 10GbE links. Also, without breaking any NDAs, you will soon see the majority of server vendors offering 25 Gigabit Ethernet NICs as the standard I/O option in their latest 2-socket and 4-socket servers. This is not just about future-proofing – these new servers provide enough cores that they can be bottlenecked by 10GbE connections.

Cloud Service Providers are leveraging these faster servers to increase VM density per server, which increases their profitability. However, increasing VM density also increases I/O demands, so we see Cloud Service Providers moving to 25GbE, as it is more cost effective than moving to 40GbE.

Telcos and financial services firms are also migrating to 25GbE. Telcos investing in NFV applications benefit from faster-than-10G connectivity, while financial services firms need 25GbE to stay ahead of the competition.

Flash storage needs 25GbE too – especially for those spending their hard-earned coin on NVMe-based storage. In internal benchmarks, Mellanox labs found that a server with a single NVMe flash drive could outrun a 10GbE port. This means people investing in NVMe are wasting their Benjamins if they don’t upgrade their network connections to 25GbE. Below is a benchmark of four CEPH nodes, first with 10GbE, then with 25GbE:

[Chart: CEPH benchmark, four nodes at 10GbE vs. 25GbE]

Faster Network = Higher CEPH Performance

In aggregate, the 25GbE nodes delivered 92 percent more bandwidth and 86 percent higher IOPS than the 10GbE nodes.
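
If you want to run a similar before-and-after comparison on your own cluster, Ceph’s built-in rados bench tool is a quick starting point. A rough sketch – the pool name testpool is hypothetical, and a fair comparison should hold drives and CPUs constant across runs:

    # Write for 60 seconds against a hypothetical pool, keeping the objects
    rados bench -p testpool 60 write --no-cleanup
    # Sequential reads of the objects written above
    rados bench -p testpool 60 seq
    # Remove the benchmark objects when done
    rados -p testpool cleanup

rados bench reports aggregate bandwidth and IOPS, which maps directly onto the comparison above.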

What if I am happy with 10 Gigabit Servers?

For folks who are well served by 10GbE to the server, we built the SN2410B, whose claim to fame is being the highest-throughput 10/100GbE switch on the market as well as the most affordable, with an aggressive MSRP that is less than half the price of the nearest Arista competitor. Soon there will be many 10/100GbE switches on the market, but they will all share a common trait: just six 100GbE ports. The SN2410B has eight 100GbE ports, which, besides providing 33 percent more uplink capacity, allows for higher server density, since any spare 100GbE ports can be used for additional server connectivity at 10G, 25G, 40G, or 50GbE speeds. This is an important feature for high-density deployments of more than 48 servers in a rack.

It might not be obvious, but a 10/100G network is significantly less expensive than a 10/40G network. Consider a common data center deployment with 16 racks of highly available 10GbE-connected servers and a typical requirement of 2:1 oversubscription at the ToR:

[Diagram: 16 racks of highly available 10GbE servers with ToR and aggregation switches]

Building this network with legacy 10/40GbE switches requires six uplink ports on each ToR switch, which adds up to 192 aggregation-switch ports. A modern 10/100GbE solution needs only two 100GbE uplinks per ToR and just 64 aggregation ports. 100GbE does more than reduce cable counts; it reduces the number of switches, because fewer aggregation ports mean fewer aggregation switches. In this case, a customer could use either six legacy spine switches or two modern spine switches. I know what I would choose…
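
The arithmetic behind those numbers fits in a few lines of Python. A back-of-the-envelope sketch using the figures from the example above, plus a hypothetical 32-port spine switch:

    # Uplink math for the 16-rack example: a redundant ToR pair per rack,
    # 6x40G uplinks per ToR versus 2x100G, and 32-port spine switches.
    racks, tors_per_rack = 16, 2
    spine_ports = 32

    for name, uplinks_per_tor in (("10/40G", 6), ("10/100G", 2)):
        agg_ports = uplinks_per_tor * racks * tors_per_rack
        spines = -(-agg_ports // spine_ports)  # ceiling division
        print(f"{name}: {agg_ports} aggregation ports -> {spines} spine switches")

Running it reproduces the counts above: 192 aggregation ports and six spines for 10/40G, versus 64 ports and two spines for 10/100G.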

Half-width switches:

[Image: Mellanox SN2100 half-width switches]

Network density must match server density, and the Mellanox SN2100 and SN2100B half-width switches bring a new level of compactness to the networking market. Two of these switches can be mounted side by side in a standard 19-inch server rack, consuming a single rack unit while providing up to 128 ports via breakout cables. Customers with high-density 10/40G applications can leverage the Mellanox SN2100B, the first half-width 40G switch supported by Cumulus Linux. Higher-speed 25/100G applications are better served by the Mellanox SN2100, the first half-width 100GbE switch supported by Cumulus Linux.

These switches bring a web-scale IT innovation to more traditional data centers: breakout cables. If you were to walk through the mega data centers in the Pacific Northwest, you would not find many traditional SFP+ DAC cables. Instead, you would find 10GbE servers connected using SFP+ to QSFP breakout cables, with ToR switches that have dense QSFP-style connectors. Besides the density advantage, there is a significant cost advantage – using breakout cables typically saves customers over $1000 (street price) per server rack. That is in addition to the savings of a half-width switch, which at half the size means half the sheet metal, half the PCB material, half the fans, half-sized power supplies – and yet all the performance. In fact, because we use the same Spectrum ASIC on the little SN2100 that we use on the big switches, the SN2100 has the largest buffers per port of any switch in its class.
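
As a concrete example, splitting a QSFP port into four lanes on Cumulus Linux is a one-line entry in the ports configuration file. A sketch, assuming port 1 – numbering and supported breakout modes vary by platform:

    # /etc/cumulus/ports.conf -- break port 1 into four 25GbE lanes
    1=4x25G
    # (or 1=4x10G on a 40G switch for 10GbE servers)

    # Restart switchd for the change to take effect:
    sudo systemctl restart switchd.service

The split port then shows up as four interfaces, swp1s0 through swp1s3.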

Mellanox and Cumulus Networks technology is deployed at some of the largest web-scale IT operations in the world, and we see a growing set of operators actively modernizing their infrastructure. Together we make running modern business applications possible.

So stop with the 10/40G switches. Every time you get asked about 10/40G, ask about 10/100G instead – it is the biggest no-brainer in networking today. Anyone planning a new data center buildout should consider 100GbE.

Get hands-on experience with Cumulus VX and see the results yourself!

Disagree with the need for 25GbE or 100GbE? Feel free to leave your comment below. We would love to have a discussion!