Virtual private networks (VPNs) provide security when remote workers access corporate networks, but they’re notoriously slow. Backhauling all traffic for all remote users through the corporate data center just isn’t practical when work from home really starts to scale. Fortunately, VPNs can be configured to operate in more than one way.
Today, most organizations—regardless of size—use some combination of on-premises and public cloud computing. This means that some requests need to go to one or more corporate data centers, while some need to find their way to the Internet.
Traditional VPNs send all requests—both corporate-bound and Internet-bound—through the corporate network because that’s where the corporate information security defenses are located. Today, this approach is causing significant performance problems.
The most popular traditional solution to VPN performance problems was simply to buy a bigger router or firewall. The overhead of the VPN tunnel on throughput isn’t that large, and many traditional corporate applications weren’t latency sensitive. This meant that performance problems usually occurred because the device where the VPNs terminated—the router or firewall—just didn’t have enough processing power to handle the required number of concurrent sessions at the required throughput.
Times have changed, and technology has advanced. While 2020 has brought a whole new set of VPN problems that are unlikely to evaporate any time soon, it also demonstrated for many organizations that their existing VPN hardware can scale handily.
Raw processing power was rarely the bottleneck, and where it was, adding an additional device to shoulder the load is a lot easier today than it was five years ago. Similarly, a significant portion of the global population all started working from home at roughly the same time, and the Internet didn’t collapse. While there’s been congestion in some places, most Internet service providers in developed nations report minimal impact on their infrastructure.
In fact, if we’re to believe customer anecdotes, preliminary survey results and even social media, the two biggest issues that businesses encountered in scaling their VPN capacity to meet 2020’s spike in demand were the ability to get licenses for new users, and latency.
… at the speed of light
Latency is the primary source of performance concerns with today’s VPNs, and for good reason: Technology can do a lot of things, but it can’t change the speed of light. Consider for a moment the connectivity between a branch office located 1000km away, and a data center located at the corporate headquarters.
The speed of light in fibre optic cables is roughly 30% slower than it is in free space. Taking some liberties with rounding, this means that a signal takes roughly 490µs to travel 100km, or about 4.9ms per 1000km. Ignoring any additional latency imposed by servers, applications, or intermediary networking equipment, that branch office 1000km away could expect each request it makes of corporate IT resources to incur a minimum round trip of a little under 10ms.
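The figures above can be checked with a few lines of arithmetic. This is a sketch, and the refractive index used (n ≈ 1.468, a common value for single-mode fibre) is an assumption on my part:

```python
# Sketch verifying the propagation figures above; the refractive index
# is an assumed typical value for single-mode fibre, not a measurement.
C_VACUUM_KM_S = 299_792.458        # speed of light in vacuum, km/s
N_FIBER = 1.468                    # assumed refractive index of the fibre

v_fiber = C_VACUUM_KM_S / N_FIBER  # ~204,000 km/s, roughly 30% below vacuum

one_way_100km_us = 100 / v_fiber * 1_000_000
round_trip_1000km_ms = 2 * 1000 / v_fiber * 1000

print(f"one way, 100 km:     {one_way_100km_us:.0f} us")     # ~490 us
print(f"round trip, 1000 km: {round_trip_1000km_ms:.1f} ms") # ~9.8 ms
```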
10ms is an absolutely fantastic latency for the majority of applications. In a traditional environment where employees at the branch location are only accessing corporate resources located in the data center at HQ, 10ms would provide an excellent experience. Unfortunately, this is true only for traffic terminating at the HQ data center.
With a traditional VPN setup, traffic destined for the Internet will travel through the corporate data center, then out onto the Internet to its destination, and then have to make the trip back. Not only does the latency incurred by the speed of light start to add up in a hurry, but each time the signal is converted from optical to electrical (or vice versa), and/or processed by a switch, router or information security product, the latency gets worse.
A best-case 10ms latency to HQ is probably more realistically 25ms, and going through HQ to the Internet and back again can start moving past 100ms. 100ms is traditionally considered the threshold where users start to notice performance impacts.
Split-tunnel VPNs are currently the most practical solution to VPN performance problems. With split-tunnel VPNs, only traffic destined for the corporate data center is sent across the corporate HQ network. Internet-bound traffic uses the branch location’s (or work-from-home employee’s) local Internet connection.
This approach cuts a significant amount of latency off of any Internet-related traffic, resulting in (among other benefits) less frustrating video conferencing. Unfortunately, split-tunnel VPNs have a problem. Internet traffic was routed through the corporate network for a reason: that’s where the corporate information security defenses are.
Accomplishing split-tunnel VPNs in a safe and useful manner means solving the security problem. This means either having information security defenses locally (increasingly normal for branch offices), or using an Internet and web gateway (the current fashion for work-from-home connectivity). The modern implementation of an Internet and web gateway is a virtual machine running in a public cloud that serves as an Internet access point for corporate traffic, and possesses the information security resources required to inspect and defend that traffic.
Making use of split-tunnel VPNs, whether to send information directly to the Internet or to route it through the geographically closest Internet and web gateway, requires either getting creative with the configuration of an individual’s VPN client or deploying managed networking devices to branch locations and work-from-home locations.
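As one illustration of what "getting creative with the configuration" can look like, a WireGuard client can be split-tunnelled simply by listing only corporate prefixes in AllowedIPs. This is a minimal sketch; every address, key, hostname, and prefix below is a placeholder:

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.1.0.53            ; corporate resolver, reached over the tunnel

[Peer]
PublicKey = <gateway-public-key>
Endpoint = vpn.example.com:51820
; Split tunnel: only corporate prefixes ride the VPN. Because 0.0.0.0/0
; is deliberately absent, everything else uses the local Internet
; connection directly.
AllowedIPs = 10.0.0.0/8, 172.16.0.0/12
PersistentKeepalive = 25
```

The same AllowedIPs technique works whether the peer is a router at HQ or an Internet and web gateway running in a nearby public cloud region.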
Managing this at scale can be a problem, and reinforces the importance of having adaptable, centrally managed software-defined networking. NVIDIA® Cumulus NetQ can help. By easing the administrative burden on network administrators, it frees their time to work on more complex, but ultimately far more performant, distributed networking architectures. That’s something most of us would agree is growing in importance every day.