What is dynamic routing? Why is Routing Information Protocol (RIP) horrible, and Open Shortest Path First (OSPF) ever so slightly less horrible? How does Linux handle OSPF, and what advantages does it bring over traditional networking gear in complex, intent-based, infrastructure-as-code environments?
RIP and OSPF are Interior Gateway Protocols (IGPs). IGPs are protocols designed to allow network routers and switches within an organization’s internal network to dynamically reconfigure the network to respond to changes. These changes may include the addition or removal of network equipment or network links between network devices.
The purpose of IGPs is to tell networking equipment which devices live where. While devices that are part of the same subnet can find one another, they require a router to communicate with devices on other subnets. Routers and switches keep routing tables of which devices are on which physical interface, and VLAN. These routing tables allow each device to know where to send a packet to reach a given system, and whether or not that packet needs to be encapsulated or tagged.
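The core operation behind a routing table is longest-prefix matching: the most specific route that covers the destination wins. A minimal sketch in Python, using invented addresses and interface names purely for illustration:

```python
import ipaddress

# A toy routing table: destination prefix -> (next hop, interface).
# All prefixes, next hops, and interface names here are illustrative.
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  ("10.255.0.1", "eth0"),
    ipaddress.ip_network("10.1.2.0/24"): ("10.1.2.254", "eth1"),
    ipaddress.ip_network("0.0.0.0/0"):   ("192.0.2.1",  "eth2"),  # default route
}

def lookup(dest: str):
    """Return the route whose prefix matches dest most specifically."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(lookup("10.1.2.7"))      # matched by the /24, the most specific route
print(lookup("10.9.9.9"))      # falls back to the broader /8
print(lookup("198.51.100.5"))  # nothing else matches: default route
```

Real routers do this lookup in hardware or with optimized trie structures, but the selection rule is the same.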
IGPs allow routers and switches to exchange some or all of their routing tables so that other devices within the network fabric know where to send packets that are bound for a specific device. The dissemination of routing table updates throughout a network is called “convergence.” The time it takes for network changes to converge increases in importance with the size of the network, and with the adoption of modern dynamic application development practices.
Routing Information Protocol
RIP has been around for some time. It was already in widespread use before the standard was formalized in 1988. Its successor, RIPv2, was developed in 1994, and the standard was finalized in 1998.
When multiple possible paths between source and destination exist, RIP uses hop count as the metric to determine which network link should be used to send a packet. Hop count is the number of devices between a network device and its destination. RIP is limited to 15 hops, severely limiting the size of the network on which it can operate. RIPv2 retains RIPv1’s 15-hop limit. The big differences are that RIPv2 carries subnet masks in its updates, enabling classless routing, and multicasts those updates rather than broadcasting them as RIPv1 does.
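RIP’s path selection can be sketched in a few lines: a router takes the hop counts its neighbors advertise for a destination, adds one hop (the neighbor itself), picks the minimum, and treats 16 as unreachable. This is a simplified illustration, not a protocol implementation:

```python
RIP_INFINITY = 16  # RIP treats a metric of 16 as "unreachable"

def best_route(advertisements):
    """Pick the next hop with the lowest hop count, RIP-style.

    advertisements: {neighbor: hops the neighbor claims to the destination}
    Returns (next_hop, total_hops), or (None, RIP_INFINITY) if unreachable.
    """
    best_hop, best_cost = None, RIP_INFINITY
    for neighbor, hops in advertisements.items():
        cost = hops + 1  # one extra hop to reach the neighbor itself
        if cost < best_cost:
            best_hop, best_cost = neighbor, cost
    return best_hop, best_cost

# Neighbor B claims 3 hops, neighbor C claims 5: B wins at 4 total.
print(best_route({"B": 3, "C": 5}))  # ('B', 4)
# A neighbor already at 15 hops puts the destination out of reach.
print(best_route({"B": 15}))         # (None, 16)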
With RIP, convergence is slow. Network routing table dissemination takes longer than alternative protocols, in part because RIP sends the entire routing table with each update. For all of the reasons listed earlier, RIP is an absolutely terrible routing protocol for the modern era, and should never be used on anything but the smallest networks.
RIP’s flaws have been known for some time. Cisco created the proprietary Interior Gateway Routing Protocol (IGRP) to replace RIP, which overcame many of RIP’s deficiencies. The proprietary nature of the protocol, however, limited its adoption.
Enhanced Interior Gateway Routing Protocol (EIGRP) is another Cisco proprietary protocol that aims to effectively replace RIP, RIPv2, and IGRP. Like RIP, IGRP and EIGRP are distance-vector routing protocols, choosing paths based on a distance metric advertised by neighbors (hop count in RIP’s case; a composite metric for IGRP and EIGRP); as a result, RIP, IGRP and EIGRP are often considered to be part of the same family of routing protocols.
EIGRP was made into an open standard in 2013. While adoption beyond Cisco’s sphere of influence is occurring, it’s been slow going, and OSPF is the open routing protocol that still dominates the data center.
OSPF is a link-state routing protocol. This means that, instead of counting how many hops exist between sender and receiver, OSPF assigns each link a cost, typically derived from its bandwidth, and computes the lowest-cost path. As a result, OSPF has no hop count limit, and can handle much larger networks than RIP. OSPF convergence is fast, in part because OSPF sends only small updates, instead of the entire routing table.
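A link-state router builds a map of the whole topology and runs a shortest-path algorithm (Dijkstra’s, in OSPF’s case) over link costs. The sketch below assumes the common convention of deriving cost from a reference bandwidth divided by link bandwidth; the topology and numbers are invented:

```python
import heapq

# OSPF derives each link's cost from its bandwidth: faster links cost less.
# Here cost = REFERENCE_BW // link_bw; all values are illustrative.
REFERENCE_BW = 100_000  # Mbit/s

# links: node -> [(neighbor, link bandwidth in Mbit/s)]
links = {
    "A": [("B", 10_000), ("C", 1_000)],
    "B": [("A", 10_000), ("D", 10_000)],
    "C": [("A", 1_000), ("D", 100_000)],
    "D": [("B", 10_000), ("C", 100_000)],
}

def shortest_path(src, dst):
    """Dijkstra over bandwidth-derived costs, as a link-state protocol would run."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, bw in links[node]:
            if neighbor not in seen:
                heapq.heappush(
                    queue, (cost + REFERENCE_BW // bw, neighbor, path + [neighbor])
                )
    return None

# Two 10G hops (cost 10 + 10) beat a 1G hop plus a 100G hop (cost 100 + 1).
print(shortest_path("A", "D"))  # (20, ['A', 'B', 'D'])
```

Note how the 15-hop problem disappears: path length no longer matters, only cumulative cost.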
The first version of OSPF was standardized in 1989, and OSPFv2 was standardized in 1998. OSPF networks can be subdivided into “areas,” which leads to most OSPF networks looking like a series of interconnected star topologies. OSPFv3 can be thought of as “OSPFv2 for IPv6,” and was standardized in 2008.
Intermediate System to Intermediate System (IS-IS) operates somewhat similarly to OSPF, in that it uses link state instead of hops as its metric. The first edition of the standard was published in 1992, and the second edition was published in 2002. It’s been widely implemented by service providers, but hasn’t seen wide use in smaller data centers.
Though IS-IS is generally the more capable of the two, there are some significant differences. Without delving too deep into the technical differences, OSPF networks more easily create star topology networks, whereas IS-IS more easily creates hierarchical backbone-style networks. This is one reason why IS-IS is very popular with service providers, while OSPF remains the de facto standard in the data center.
Another reason OSPF still dominates is because OSPF is a layer 3 protocol, while IS-IS is a layer 2 protocol. This means that OSPF information is exchanged using data packets that can be routed, while IS-IS messages cannot be.
Data packets from layer 3 protocols can traverse routers. IP is the most famous example of a layer 3 protocol. Its ability to be routed allows computers to communicate even when their packets must transit multiple routers to cross the internet.
Layer 2 protocols, however, can only connect devices that do not need to transit a router. In the case of OSPF and IS-IS, OSPF allows routers to exchange information directly with routers located more than one hop away in the routing fabric, while IS-IS only allows routers to exchange information with immediate neighbors.
The short answer for why it’s useful to use network devices based on Linux when using either RIP or OSPF is that both routing protocols are ancient. Even OSPFv3, despite being the newest of the bunch, is a little long in the tooth at more than a decade old.
These protocols were designed for an era in which a “dynamic network” was one in which a cable failed, or an individual router was rebooted. None of these protocols, not even Cisco’s EIGRP, are really designed for a world in which DevOps teams can spin up tens of thousands of workloads – complete with virtual switches and virtual routers – with a single script, and then destroy them all just as easily.
Traditional networking equipment doesn’t allow for a lot of customization. With Linux, on the other hand, you can essentially do anything you can think of. If you wanted to create a series of filters so that only some routing information is propagated, it’s far easier in a Linux-based networking environment.
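As an illustration, a Linux routing suite such as FRRouting lets you express this kind of filtering directly in configuration. The sketch below uses a prefix-list and route-map to redistribute only selected prefixes into OSPF; the names and prefixes are invented for the example:

```
! Only the 10.10.0.0/16 lab prefixes are advertised into OSPF;
! everything else learned from the kernel is filtered out.
ip prefix-list LAB-ONLY seq 10 permit 10.10.0.0/16 le 24
!
route-map FILTER-LAB permit 10
 match ip address prefix-list LAB-ONLY
!
router ospf
 redistribute kernel route-map FILTER-LAB
```

Because FRRouting configuration is plain text, filters like this can be generated and deployed by the same automation that creates the workloads themselves.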
Similarly, Linux networking can pre-seed routing tables during the spin-up process of those thousands of DevOps-driven workloads, or send routing information based on any number of triggers that one might dream up.
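For instance, a provisioning script could pre-seed the kernel routing table by generating an iproute2 batch file and applying it with `ip -batch`. A minimal sketch, where every prefix, next hop, and device name is a placeholder:

```python
# Generate an iproute2 batch file that pre-seeds routes for freshly
# spun-up workloads; apply it (as root) with: ip -batch /tmp/preseed.routes
# All prefixes, next hops, and device names below are placeholders.

workloads = [
    ("10.20.1.0/24", "10.0.0.2", "eth0"),
    ("10.20.2.0/24", "10.0.0.3", "eth0"),
]

def batch_lines(entries):
    """One 'route add' line per workload subnet, in ip -batch syntax."""
    return [f"route add {prefix} via {gw} dev {dev}"
            for prefix, gw, dev in entries]

with open("/tmp/preseed.routes", "w") as f:
    f.write("\n".join(batch_lines(workloads)) + "\n")
```

The batch file is just text, so it can be built from an inventory system, a CI pipeline, or any other trigger, which is exactly the kind of flexibility traditional equipment lacks.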
Stretch Your Limits
In short: today’s networks are far more dynamic than was envisioned when these protocols were made. Traditional networking equipment, being rigidly standards-based, doesn’t allow for bending – let alone breaking – the rules. Linux does. While information exchange between network devices still has to use the standardized protocols, Linux allows administrators far greater flexibility in how, why and when those protocols are used.
This is the nature of the IT industry. We all wait for standards bodies to make the next great standard, and then wait even longer for those new protocols and standards to be supported by all devices (factoring in the time it takes equipment to age out of a data center).
In the meantime and between time, we stretch and bend the existing protocols to their limit. This is what Linux is great at, and why Linux should be at the core of your network.