Edge computing deployments need to be compact, efficient, and easy to administer. Hyperconverged infrastructure (HCI) has proven to be a natural choice for handling compute and storage at the edge, but what considerations are there for networking?
To talk about edge computing it helps to define it. Edge computing is currently in a state very similar to “cloud computing” in 2009: If you asked five different technologists to define it, you’d get back eight different answers. Just as cloud computing incorporated both emerging technologies and a limited set of established practices, edge computing does the same.
The broadest definition of edge computing is that it’s any situation in which an organization places workloads in someone else’s infrastructure that isn’t one of the major public clouds. This comes with the caveat that the major public cloud providers are, of course, heavily investing in edge computing offerings of their own, muddying the waters.
Traditional IT practices that fall into the realm of edge computing today include colocation, content delivery networks (CDNs), most things involving geographically remote locations and so forth—the “edge” of modern networks. But edge computing also covers the emerging practices of using mobile networks (among others) for Internet of Things (IoT) connectivity, and placing IT equipment inside the networks of other organizations. The latter example is popular in manufacturing, where large manufacturers wish to maintain oversight of the contract manufacturers they engage.
In practice, edge computing is usually defined by one of two use cases. The first involves collecting more data at the edge of the network than can reasonably be sent to the core for processing, necessitating compute capacity at the edge to filter inputs before they’re sent to the core. The second involves making workloads (either complete applications, or only microservices that are part of a larger application) available at the edge, because latency to connect to the core is simply too high.
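The first use case can be made concrete with a small sketch. This is a minimal, hypothetical example (the `Reading` type, band thresholds, and summary fields are all illustrative, not from any particular product): the edge node forwards only anomalous readings to the core and keeps an aggregate summary for everything else, drastically reducing the data sent upstream.

```python
from dataclasses import dataclass


@dataclass
class Reading:
    """One sensor sample collected at the edge (hypothetical schema)."""
    sensor_id: str
    value: float


def filter_for_core(readings, low=20.0, high=80.0):
    """Split edge telemetry into (anomalies to forward, local summary).

    Readings inside the 'normal' band [low, high] are not shipped to the
    core; only a compact summary of them is. Out-of-band readings are
    forwarded in full for central processing.
    """
    anomalies = [r for r in readings if not (low <= r.value <= high)]
    summary = {
        "count": len(readings),
        "mean": sum(r.value for r in readings) / len(readings) if readings else 0.0,
    }
    return anomalies, summary
```

With a thousand sensors sampling every second, forwarding two anomalies plus one summary per interval is a very different network bill than forwarding every sample.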
In both use cases, more “oomph” is required at the network edge than was common 10 years ago, which is often a problem, given that physical space in edge computing locations comes at a premium.
HCI shines at the edge
Without onsite IT staff to handle change requests, it’s important to pick an edge computing environment that is dynamic, easy to manage, and administrable in a programmatic and/or composable fashion, so that infrastructure as code can amplify the ability of existing IT teams to cope with increasing scale and complexity. By combining compute, storage, and networking, HCI has proven to be an excellent solution to these problems.
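The core of the infrastructure-as-code model described above is reconciliation: declare the desired state, compare it to what’s actually deployed, and apply only the difference. A minimal sketch, assuming nothing beyond plain dictionaries keyed by resource name (the resource names and fields here are invented for illustration):

```python
def plan_changes(desired: dict, actual: dict) -> dict:
    """Compute the change set needed to converge 'actual' onto 'desired'.

    Resources present only in 'desired' must be created; resources present
    only in 'actual' must be deleted; resources in both but with differing
    definitions must be updated. This is the diff step an orchestration
    tool runs before touching any real infrastructure.
    """
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
        "update": sorted(
            k for k in set(desired) & set(actual) if desired[k] != actual[k]
        ),
    }
```

Because the plan is computed rather than hand-written, the same small IT team can manage ten edge sites or ten thousand without the per-site effort growing.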
This is especially true in shared infrastructure locations, such as micro-datacenters at the bottom of cell towers, where extreme space constraints are in play. Here it’s common for HCI to be preferred not only because it’s a compact way to run multiple workloads, but because the virtualization and containerization technologies that HCI relies on allow micro-datacenter owners to share physical resources among multiple organizations. Given the space constraints, it’s often the only practicable solution.
HCI brings networking considerations of its own. Storage traffic between nodes is high throughput and needs to be as low latency as the laws of physics allow. Multiple organizations sharing infrastructure where space is a critical constraint also means that virtual networking (in one flavor or another) is a given, while workloads living in someone else’s datacenter make SD-WAN an absolute necessity for security purposes.
All of this is to say that networking at the edge is hard. It’s complex, frequently contains multiple network overlays, has to physically fit into a small space, and has to be capable of coping with the rapid changes that are part and parcel of technological innovation anywhere resources are scarce.
Automation, orchestration, management
None of this is happening without automation. And since everything in IT is increasingly automated, an orchestration platform is a must in order to get a handle on it all. This places an emphasis on the management, monitoring, and analytics capabilities of the networking products chosen both by organizations placing workloads at the edge, and by service providers offering edge computing capabilities.
Here, open networking matters, but for reasons that extend beyond the traditional talking points of interoperability, vendor independence, and programmability. Open networking matters because it means one networking operating system can work on multiple different switches. This in turn means a single management plane can provide orchestration of all networking automation regardless of where that networking happens to be.
With open networking hardware, vendors are free to experiment with different physical form factors and combinations of capabilities. A single operating system can work on them all. Integrations between HCI platforms (which have their own management platforms, with their own orchestration and APIs) and network management only have to be done once, but can cover a wide range of possible use cases.
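The “one operating system, many form factors” point can be illustrated with a sketch. Everything here is hypothetical (the fleet records, hostnames, and config syntax are invented, not any real NOS): because every switch runs the same operating system, a single template function covers the whole fleet regardless of physical form factor, which is exactly what lets one management plane orchestrate all of them.

```python
def render_config(switch: dict, vlans: list) -> str:
    """Render the same config intent for any switch in the fleet.

    Since all models run one NOS, the template doesn't branch on
    hardware model; only per-device facts like hostname differ.
    """
    lines = [f"hostname {switch['hostname']}"]
    for vlan_id in vlans:
        lines.append(f"vlan {vlan_id}")
    return "\n".join(lines)


# A mixed-form-factor fleet, managed identically (hypothetical models).
fleet = [
    {"hostname": "edge-tor-1", "model": "1U-48-port"},
    {"hostname": "edge-half-1", "model": "half-width-24-port"},
]
configs = {sw["hostname"]: render_config(sw, [10, 20]) for sw in fleet}
```

Add a third switch model next quarter and the management code above doesn’t change; that is the integration-done-once property in practice.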
In other words, open networking is important to edge computing because things change so fast at the edge that organizations need networking that can keep up, something that traditional, vendor-locked networking struggles to do. There’s no time for decades-long standards wars, or incompatible APIs at the edge.
At the edge, latency matters. To your applications. To organizations trying to evolve their offerings. And to the pocketbooks of everyone involved.