When you aren’t the size of Netflix, you may not be guaranteed dedicated infrastructure within a data center; you have to share. Even in larger organizations, multitenancy may be required to solve regulatory compliance issues. So what is multitenancy, how does it differ from other forms of resource division, and what role do networks play?

Gartner Inc. defines multitenancy as “a reference to the mode of operation of software where multiple independent instances of one or multiple applications operate in a shared environment. The instances (tenants) are logically isolated, but physically integrated.” This is basically a fancy way of saying “cutting up IT infrastructure so that more than one user, department, organization, and so on can share the same physical IT infrastructure, without being able to see one another’s data.”

That “without being able to see one another’s data” is the critical bit. Allowing multiple users to share a single computer has been possible for decades; multi-user operating systems, for example, let multiple users log in to the same machine at the same time. But while this approach does allow multiple users to share a physical piece of IT infrastructure, it isn’t multitenancy.

In a multi-user OS, all the users logged in to the system are using the same OS. The only thing that prevents one user from seeing another user’s data is the security controls of that OS. Barring additional isolation software, typically from a third party, users can see what other users are doing on the system, and may be capable of launching any number of attacks against other users or their data.
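To make that concrete, here’s a minimal sketch in Python, assuming a stock multi-user Linux install with no additional isolation: any logged-in user can enumerate every process on the system via /proc and see who owns it.

```python
import os

def visible_process_owners() -> set:
    """Collect the UIDs that own every process this user can see."""
    owners = set()
    for entry in os.listdir("/proc"):
        if entry.isdigit():  # numeric entries in /proc are PIDs
            try:
                owners.add(os.stat(f"/proc/{entry}").st_uid)
            except FileNotFoundError:
                pass  # the process exited while we were looking
    return owners

# On a shared box this typically prints many UIDs, not just your own:
# the OS's permissions protect file contents, not visibility of activity.
print(visible_process_owners())
```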

Isolation Is Key

Multitenancy differs from multi-user concepts in that it incorporates the idea of isolation. A good example is two virtual machines (VMs) running on a single host: the OS running inside a VM can’t see into other VMs on the same host. Multitenancy, however, also incorporates the idea of reproducing entire environments, notably including management capabilities, for each tenant. This adds another dimension of consideration.

A single virtualization host running two VMs could be a multi-tenant environment if the environment being reproduced is entirely contained within each VM. Let’s say, for example, that your company rents out dedicated web server VMs, where each VM has a complete management suite. The user’s entire interaction with their environment happens through a web-based management application, SSH, and the website the VM ultimately serves. In this example, user data is separated because each VM is self-contained: the data doesn’t leave the VM, and users can’t break out of their VM to go rummage around in someone else’s.

Public cloud providers have shown the world what’s possible. As a result, even when talking about multitenancy within a single organization, IT infrastructure has to be considerably more advanced than a few self-contained VMs before anyone can start using the word “multitenancy” seriously.

Today, multitenancy requires not only that networking, storage, and compute resources be securely divisible, but that individual tenants have a way to create, edit, and destroy both workloads and data on their own. Self-service is technically a separate concept from multitenancy, but pragmatically, they’re deeply intertwined.
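What does that self-service look like in practice? Here’s a minimal sketch of a tenant-scoped provisioning client in Python; the API endpoint, token, and field names are invented for illustration, but real platforms follow the same shape: every request is scoped to a tenant, and a tenant can create and destroy only its own workloads.

```python
import requests

# Hypothetical self-service API; the URL and routes below are invented.
API = "https://selfservice.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <tenant-scoped-token>"}

def create_workload(tenant_id, name, image):
    """Ask the platform to create a VM on the tenant's behalf."""
    resp = requests.post(
        f"{API}/tenants/{tenant_id}/workloads",
        headers=HEADERS,
        json={"name": name, "image": image},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["workload_id"]

def destroy_workload(tenant_id, workload_id):
    """Tear the workload down again -- no ticket, no operator."""
    resp = requests.delete(
        f"{API}/tenants/{tenant_id}/workloads/{workload_id}",
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    wid = create_workload("research-lab-42", "web-01", "ubuntu-22.04")
    print(f"created {wid}")
    destroy_workload("research-lab-42", wid)
```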

Consider, for example, the science department of a research university. Here, multiple individual research projects may need to be strictly segregated from one another, especially if government funding is involved. Separate physical infrastructure for each project would be expensive and inefficient, so switching, storage, and maybe even hosts end up being shared, even though both data and access must remain strictly segregated.
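One way to express that segregation on shared switching is to give each project its own VLAN and VRF, so no two projects ever share a broadcast domain or a routing table. Here’s a minimal Python sketch of that mapping; the project names and VLAN range are invented for illustration.

```python
# Map each research project onto an isolated network segment:
# its own VLAN (layer 2 isolation) paired with its own VRF
# (layer 3 isolation), all carried on the same physical switches.
PROJECTS = ["genomics", "fusion", "climate"]
VLAN_BASE = 100  # assumed starting VLAN ID

def tenant_plan(projects):
    """Assign every project a dedicated VLAN + VRF pairing."""
    plan = {}
    for offset, name in enumerate(sorted(projects)):
        plan[name] = {"vlan": VLAN_BASE + offset, "vrf": f"vrf-{name}"}
    return plan

for project, seg in tenant_plan(PROJECTS).items():
    print(f"{project}: VLAN {seg['vlan']}, routed in {seg['vrf']}")
```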

Automation, Orchestration, Action!

Data centers become more difficult to manage securely when either scale or complexity increases—you can only throw so many humans at the management problem before they start getting in one another’s way. As a result, if you want to manage a complex data center at scale, you need automation.

Multitenancy not only increases complexity, but almost always opens the door to rapid scaling of the IT resources in question. Thanks in part to a greater awareness of the risks of data theft, along with the long-term consequences of taking a lackadaisical approach to privacy, there aren’t a lot of people willing to skimp on security or privacy anymore. This reality is why automation is absolutely vital to deploying practical multitenancy.
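The core of the automation idea is to keep one source of truth and render every device’s configuration from it, rather than having humans hand-edit each box. Here’s a minimal Python sketch of that pattern; the hostnames, addresses, and template are invented, and real deployments usually reach for tools like Ansible.

```python
from string import Template

# One source of truth for the whole fleet (in practice this would
# live in version control or an inventory system, not in the script).
SWITCHES = {
    "leaf01": {"loopback": "10.0.0.1"},
    "leaf02": {"loopback": "10.0.0.2"},
}

CONFIG = Template(
    "hostname $name\n"
    "interface lo\n"
    "  address $loopback/32\n"
)

for name, facts in SWITCHES.items():
    rendered = CONFIG.substitute(name=name, loopback=facts["loopback"])
    # A real pipeline would push this to the device; here we just print.
    print(rendered)
```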

Automation is only the first step. One thousand automated systems that have to work in concert to get things done might be less chaotic than 1,000 manually operated systems that have to work in concert to get things done; but 1,000 automation systems trying to do anything in a coordinated fashion is still a complete madhouse.

Orchestration is simply the automation of those automated systems. It’s the reason you can log in to a cloud provider, fill out a wizard, and with the push of a button have a pre-canned set of VMs, load balancers, security features, and virtual networks instantiated, configured, and made publicly available.

To use an analogy, automation is like graduating from grinding your flour with a mortar and pestle to using an electric grinder. Orchestration is an industrial bakery.
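To make the difference concrete, here’s a minimal Python sketch of orchestration as the “automation of automations”: each step is itself automated, and the orchestrator’s only job is to run them in the right order. The step names are invented, but this is roughly what a cloud wizard’s button drives behind the scenes.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each "step" stands in for an entire automated workflow.
def provision_network():
    print("virtual network up")

def provision_vms():
    print("VMs instantiated")

def apply_security_rules():
    print("security rules applied")

def provision_load_balancer():
    print("load balancer configured")

# The orchestrator's view: each step mapped to the steps it depends on.
STEPS = {
    provision_network: set(),
    provision_vms: {provision_network},
    apply_security_rules: {provision_network},
    provision_load_balancer: {provision_vms, apply_security_rules},
}

# One button push: every automation runs, in dependency order.
for step in TopologicalSorter(STEPS).static_order():
    step()
```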

Multitenancy in today’s data centers combines self-service, orchestration, and automation with logical infrastructure isolation technologies like compute, storage, and network virtualization. The result is the ability for multiple organizations to share the same physical infrastructure securely, but making this happen isn’t easy.

The Advantages of ‘Open’

Multitenancy requires the orchestration of multiple types of IT infrastructure. In any given data center there can be multiple vendors providing products for networking, compute, and storage, in addition to vendors for management, tenant self-service capabilities, security, and more. Orchestrating all of these pieces is most easily accomplished if each product uses both open protocols and open standards.

This is especially true of networks, which are the backbone tying together all the other technologies involved. Today’s networks involve multiple vendors: in addition to physical networking, there are virtual switches in both hypervisors and microvisors, and each hosted and public cloud provider has its own networking to consider. Organizations also have to put effort into securing the connections between their data centers, as well as the connections to the public cloud and hosted services they use.

Automation is critical for making the day-to-day of modern IT viable. Open standards and open protocols, on the other hand, are what keep modern IT viable over the long term.

Modern data centers no longer do bulk forklift upgrades. The “refresh cycle” is a myth, one displaced by the cold reality of perpetual organic growth. There is constant churn in the data center, a response not only to the need to continually scale, but also to the need to constantly change.

Data centers are no longer homogeneous, single-vendor islands. Advancing data centers to deliver the multitenancy that today’s users expect means creating data centers that are not only complicated and constantly evolving, but also capable of coping with multiple similar products from multiple vendors.

For products and vendors to be able to enter—and eventually leave—the data center without causing operational disruption, these products must all be able to communicate in a standardized fashion. This is where open standards and open protocols come in. It’s also where Cumulus Linux comes in.

Cumulus Linux is based upon open standards, open protocols, and open source. Cumulus Linux can be fully automated and orchestrated, and plays well with others. If you’re looking to evolve your network toward the kind of multitenancy that public clouds have taught us all to expect, then Cumulus Linux is what you’ll need to make your network simple enough to manage … and keep it that way.