A lot of the early hype around cloud computing centered on grand visions of a world served by only five or six extremely large cloud providers. While public clouds continue to grow at a breakneck pace, private clouds are also seeing immense traction, especially in key verticals like financial services, SaaS providers, and telecom service providers.

Over time and through extensive trial and error, the marketplace is realizing that there are two key requirements for successfully implementing cloud computing:

  • Simplicity: This primarily refers to breaking down silos that have plagued IT departments of all sizes, allowing for a unified framework across compute, storage and networking.
  • Infrastructure automation: This ranges from automated provisioning to full lifecycle management of infrastructure, implemented in a software-defined manner. It is often referred to as Infrastructure as Code, or Idempotent IT (a minimal sketch follows this list).
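
To make "idempotent" concrete, here is a minimal sketch in Python of a provisioning step that converges on a desired state rather than blindly reapplying changes. The Vlan type, ensure_vlan function and in-memory inventory are illustrative stand-ins, not any particular tool's API.

    # Hypothetical sketch of an idempotent provisioning step: running it once
    # or ten times leaves the system in the same state.
    from dataclasses import dataclass

    @dataclass
    class Vlan:
        id: int
        name: str

    # Stand-in for a real device or controller inventory (assumed for illustration).
    current_state: dict[int, Vlan] = {}

    def ensure_vlan(vlan_id: int, name: str) -> bool:
        """Converge on the desired state; return True if a change was made."""
        desired = Vlan(vlan_id, name)
        if current_state.get(vlan_id) == desired:
            return False                    # already converged, nothing to do
        current_state[vlan_id] = desired    # create or correct the resource
        return True

    print(ensure_vlan(100, "storage"))  # True  -> change applied
    print(ensure_vlan(100, "storage"))  # False -> re-run is a no-op

Because a re-run detects that the state is already correct and does nothing, the same automation can be applied repeatedly and safely across the full lifecycle of the infrastructure.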

Simplicity and infrastructure automation have been extensively covered by leading IT analysts and, along with application-level paradigms like Hadoop, have often been referenced as the way to achieve the extraordinary scale and success of Web scale IT shops like Google, Facebook and Amazon.

But until now, assembling the entire set of components and automating them effectively required either open source wizardry with lengthy integration and build times, or heavy reliance on vendor-provided pre-integrated solutions that come at significant added cost and still require lengthy deployment cycles.

Hyper-convergence and open networking are gaining popularity together, perhaps because customers are looking to address high upfront CapEx across compute, storage and networking as they optimize their entire data center investment, typically during a data center refresh cycle.

Hyper-convergence and Open Networking

Here is how the two have come together to provide a compelling cloud computing infrastructure strategy:

  • The last decade has witnessed high rates of adoption of virtualization across data centers.
  • Managing and scaling virtualized infrastructure drove growth in storage-centric architectures, where diskless servers booted from shared storage systems such as SAN/NAS.
  • As virtualization frameworks have evolved and subsumed a lot of these capabilities, the need for dedicated storage appliances with lots of storage management options is going away. This has also led to the decline of dedicated, expensive storage networking, a trend that enterprise customers have welcomed.
  • Hyper-converged infrastructure vendors are storage companies at heart that are putting a distributed storage tier back into compute nodes and pooling storage across a cluster of machines. This is the approach many Web scale companies have taken, though usually focused on large-scale data storage and processing frameworks like Hadoop and not on enterprise applications.
  • Web scale networking has similarly focused on lowering the price per port, eliminating proprietary technologies and delivering a scale-out fabric. It is transforming data center networking economics and evolving the architecture toward a scale-out, Clos fabric-based approach (a sizing sketch follows this list).
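
To give a feel for how a Clos (leaf-spine) fabric scales out, here is a back-of-the-envelope sizing sketch in Python. All port counts are illustrative assumptions rather than vendor specifications, and the oversubscription figure assumes uniform port speeds.

    # Two-tier Clos (leaf-spine) sizing with assumed, illustrative numbers.
    LEAF_DOWNLINKS = 48   # assumed server-facing ports per leaf switch
    LEAF_UPLINKS = 6      # assumed spine-facing ports per leaf switch
    SPINE_PORTS = 32      # assumed ports per spine switch

    spines = LEAF_UPLINKS               # every leaf connects to every spine
    leaves = SPINE_PORTS                # every spine connects to every leaf
    server_ports = leaves * LEAF_DOWNLINKS
    oversubscription = LEAF_DOWNLINKS / LEAF_UPLINKS

    print(f"{leaves} leaves x {spines} spines -> {server_ports} server ports "
          f"at {oversubscription:.0f}:1 oversubscription")
    # 32 leaves x 6 spines -> 1536 server ports at 8:1 oversubscription

Growing such a fabric means adding leaves for more server ports, or adding spines and uplinks for more cross-sectional bandwidth, rather than forklift-upgrading a chassis.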

For obvious reasons around agility, simplicity and scalability, and ultimately to improve the customer’s experience, hyper-converged systems benefit from having a complementary scale-out networking layer. Choosing Cumulus Linux and bare metal switching for the networking component achieves all of these goals along with significant savings in CapEx. These savings can translate into additional SSDs and therefore greater performance, additional nodes that equate to more VMs, a better DR plan, or simply an investment in a faster network.

Cloud computing economics is all about the price per VM, and no matter how you slice it, using a Cumulus Networks-based fabric will improve this metric.
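
As a toy illustration of that metric, here is the arithmetic with entirely made-up placeholder figures; substitute your own quotes. The point is only the shape of the math: lowering network CapEx lowers the cost of every VM the cluster hosts.

    # Toy price-per-VM comparison; every dollar figure is a placeholder.
    def price_per_vm(compute: float, storage: float, network: float, vms: int) -> float:
        """Total infrastructure CapEx divided by the VMs it can host."""
        return (compute + storage + network) / vms

    VMS = 1000  # assumed cluster capacity
    traditional = price_per_vm(500_000, 200_000, 150_000, VMS)
    open_fabric = price_per_vm(500_000, 200_000, 60_000, VMS)

    print(f"traditional fabric: ${traditional:,.0f} per VM")  # $850 per VM
    print(f"open networking:    ${open_fabric:,.0f} per VM")  # $760 per VM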

Learn more on Feb 25 during the Nutanix + Cumulus Networks webinar: Deploying True Hyper Convergence with Open Networking.