
Introducing Windows Server “8” Hyper-V Network Virtualization: Enabling Rapid Migration and Workload Isolation in the Cloud

We’ve all heard about the agility that server virtualization delivers.  However, our conversations with people in the trenches made it clear that the full potential of virtualization remains frustratingly beyond their grasp.  In particular, the lack of agile networking limits the agility you can achieve at a reasonable cost.

Windows Server “8” is the most cloud optimized operating system, providing choice and flexibility in configuring private, hybrid, and public cloud solutions.  Bill Laing, in his blog post, Windows Server “8” Beta Available Now, outlined some of our key investments, including Hyper-V Network Virtualization.  In this blog post, Sandeep Singhal (General Manager of the Windows Networking team) and Ross Ortega (Principal Program Manager from the Windows Networking team) describe some of the issues surrounding cloud adoption and how Hyper-V Network Virtualization in Windows Server “8” addresses these challenges.

We’ve spent the past couple of years talking with customers about why they haven’t yet deployed their workloads to a cloud.  We consistently heard three main issues. First, they want to begin gradually moving individual services to the cloud with a flexible hybrid cloud solution. Second, moving to the cloud is difficult: it’s tedious, time-consuming, manual, and error-prone.  Third, customers are concerned about their ability to move to the cloud while preserving isolation from other tenants, be they other business units in a private cloud or competitors in a public cloud.  In the end, whether you’re building your own private clouds or considering using a public cloud provider, you want easy onboarding, the flexibility to place your virtual machines anywhere—inside or outside the cloud—and workload isolation.

Network Agility:  An Unfulfilled Promise
Underlying all of these concerns, customers want the control and flexibility to move their services to the cloud, move them to a different cloud provider, or even move them back to their enterprise datacenter. Today this is quite labor intensive because cloud hosters require customers to change the IP addresses of services when those services are moved into a particular cloud environment.  This seems like a minor deployment detail, but it turns out that an IP address is not just an arbitrary number assigned by the networking folks for addressing. The IP address also carries real semantic meaning to an enterprise: a multitude of network, security, compliance, and performance policies incorporate and depend on the actual IP address of a given service. Moving to the cloud means rewriting all of these policies; of course, you first have to find them all and then negotiate and coordinate with the different organizations that control them.  If you later wanted to move to a different cloud provider, that new hoster would assign different IP addresses, requiring yet another policy rewrite. This situation blocks many customers and scenarios from adopting the cloud.

Customers asked us to have Windows make their services in the cloud look and behave like the services running in their internal datacenters, while adhering to their existing policies and remaining isolated from other VMs running in the cloud hosting environment.  When moving to the cloud, customers want their data to be as isolated and as safe as if it were running in their own datacenter.

In summary, you demanded the ability to run Any Service on Any Server in Any Cloud.

We took this feedback seriously and designed a new technology called Hyper-V Network Virtualization in Windows Server “8” to provide a scalable, secure multi-tenant solution for those building cloud datacenters and to make it easier for customers to incrementally move their network infrastructure to private, hybrid, or public clouds.  As we will describe later, Hyper-V Network Virtualization builds on existing IETF and IEEE standards, providing interoperability with existing and future network equipment, security appliances, and operational processes.

Hyper-V Network Virtualization:  Applying Server Virtualization to Entire Networks
With traditional server virtualization, each physical server is converted to a virtual machine (VM), which can now run on top of a common physical host.  Each VM has the illusion that it is running on dedicated hardware, even though all resources—memory, CPU, and hardware peripherals—are actually shared.

Network virtualization extends the concept of server virtualization to apply to entire networks.  With network virtualization, each physical network is converted to a virtual network, which can now run on top of a common physical network.  Each virtual network has the illusion that it is running on a dedicated network, even though all resources—IP addresses, switching, and routing—are actually shared.

Hyper-V Network Virtualization allows customers to keep their own internal IP addresses when moving to the cloud while providing isolation from other customers’ VMs – even if those VMs happen to use the exact same IP addresses.  We do this by giving each VM two IP addresses. One IP address, the IP address visible in the VM, is relevant in the context of a given tenant’s virtual subnet. Following the IEEE nomenclature, we call this the Customer Address (CA). The other IP address is relevant in the context of the physical network in the cloud datacenter. This is called the Provider Address (PA).  This decoupling of tenant and datacenter IP addresses provides many benefits.

The first benefit is that you can move your VMs to the cloud without modifying the VM’s network configuration and without worrying about what else (or who else) is sitting in that datacenter. Your services will continue to just work. In the video demo referenced at the end of this article, we used traceroute, a low-level network diagnostic tool, to show how on-premises services were interacting transparently with services that had been moved to the cloud. We highlighted the fact that once the services moved to the cloud, packets were simply taking an extra hop to get to the cloud datacenter.  The virtual subnet has become a nearly transparent extension of the enterprise’s datacenter. We also created a secure encrypted tunnel to the virtual subnet. The end result is that different customers with the exact same IP address, connected to the same virtual switch, are isolated from one another.

Imagine a Red VM with IP address 10.1.1.7 and a Blue VM that also has IP address 10.1.1.7. In this example the 10.1.1.7 IP addresses are CA IP addresses. By assigning these VMs different PA IP addresses (e.g., Blue PA = 192.168.1.10 and Red PA = 192.168.1.11), there is no routing ambiguity. Via policy we restrict the Red VMs to interact only with other Red VMs; similarly, Blue VMs are isolated to the Blue virtual network. The Red VM and the Blue VM, each having a CA of 10.1.1.7, can safely coexist on the same Hyper-V virtual switch and in the same cloud datacenter.
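To make the address virtualization concrete, here is a minimal sketch of the kind of lookup involved. The Python below is purely illustrative—the table layout and function name are our own, not the actual Hyper-V implementation—and uses the Red/Blue addresses from the example above.

```python
# Illustrative only: a toy virtualization policy table, not Hyper-V's actual data structure.
# Each entry maps (virtual subnet, Customer Address) -> Provider Address.
VIRTUALIZATION_POLICY = {
    ("Red",  "10.1.1.7"): "192.168.1.11",   # Red VM's PA on the physical network
    ("Blue", "10.1.1.7"): "192.168.1.10",   # Blue VM's PA on the physical network
}

def lookup_provider_address(virtual_subnet: str, customer_address: str) -> str:
    """Resolve the PA used on the physical network for a tenant's CA."""
    return VIRTUALIZATION_POLICY[(virtual_subnet, customer_address)]

# The same CA resolves to different PAs depending on the virtual subnet,
# so there is no routing ambiguity even though both VMs use 10.1.1.7.
assert lookup_provider_address("Red", "10.1.1.7") == "192.168.1.11"
assert lookup_provider_address("Blue", "10.1.1.7") == "192.168.1.10"
```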

Second, policy enforcement in the end hosts provides a scalable solution for multi-tenant isolation. We do not need to reconfigure the network infrastructure to isolate tenants from each other. Before Hyper-V Network Virtualization, the common solution was to use VLANs for isolation. However, VLANs have scalability limitations: the 12-bit VLAN ID supports only about 4,000 tenants in a shared datacenter.  VLANs are also better suited to static network topologies than to the more dynamic environment in which tenants continually join and leave the cloud datacenter, or tenant workloads are continually migrated across physical servers for load balancing or capacity management purposes.  VLANs require the reconfiguration of production switches every time a VM needs to be brought up on a new server. Typically, the VM deployment team creates a service ticket for the network operations team to reconfigure the appropriate switches with the relevant VLAN tags. By eliminating this step, Hyper-V Network Virtualization increases the overall operational efficiency of running a datacenter.

Third, by allowing you to preserve your IP addresses when moving to the cloud, Hyper-V Network Virtualization also enables cross-subnet live migration. When we talk about live migration, we mean that any client talking to a service is unaware that the VM hosting the service has moved from one physical host to a different physical host. Previously, cross-subnet live migration was impossible because, by definition, moving a VM from one subnet to a different subnet changes its IP address, and changing the IP address causes a service interruption. However, if a VM has two IP addresses, then the IP address relevant in the context of the datacenter (the Provider Address) can be changed without needing to change the IP address in the VM (the Customer Address). Therefore the client talking to the VM via the CA is unaware that the VM has physically moved to a different subnet.
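As a rough illustration of why clients never notice the move, consider what changes in such a mapping when a VM live migrates across physical subnets. Again, this is a hypothetical Python sketch; the names and data structure are ours.

```python
# Hypothetical mapping: (virtual subnet, Customer Address) -> Provider Address.
mapping = {("Red", "10.1.1.7"): "192.168.1.11"}

def on_live_migration(virtual_subnet: str, ca: str, new_pa: str) -> None:
    """Only the provider-side address changes; the CA clients use stays the same."""
    mapping[(virtual_subnet, ca)] = new_pa

# The Red VM moves to a host on a different physical subnet in the datacenter...
on_live_migration("Red", "10.1.1.7", "192.168.5.20")

# ...yet clients still reach the service at 10.1.1.7; only the PA was updated.
print(mapping[("Red", "10.1.1.7")])   # 192.168.5.20
```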

What’s really exciting is that cross-subnet live migration enables new scenarios. Recall our “Any Service, Any Server, Any Cloud” vision.  VMs can now run and live migrate anywhere in the datacenter without a service interruption, and new datacenter efficiencies can be achieved. For instance, during light load periods (such as around 3 a.m.), hosters can consolidate the active VMs onto a subset of the datacenter and power off other parts of it—all without having to reconfigure the physical network topology. Administrators no longer need to worry about a VM being trapped in one part of the datacenter because its IP address is only valid there.  Similarly, VM deployment algorithms are free to place VMs anywhere in the datacenter because the PA, which is relevant in the context of the physical datacenter, can be changed independently of the CA, which is relevant in the context of the virtual network.

With Hyper-V Network Virtualization, the virtual machine is totally unaware that its IP address is being virtualized. From the VM’s perspective, all communication is occurring via the CA IP address. Because the VMs are unaware that they are part of a virtual network, any operating system running within a Hyper-V VM (e.g., Windows Server 2008 R2, Windows Server 2003, Linux, etc.) can be a member of a virtual network. Hyper-V Network Virtualization is completely transparent to the guest OS.

Two Mechanisms for Virtualizing IP Addresses on a Subnet
Customers can deploy Hyper-V Network Virtualization in their existing datacenters, using either IP virtualization mechanism, without requiring any hardware upgrades or topology changes. We virtualize the CA IP address by using the PA when sending network traffic between different end hosts, and we offer two different mechanisms for doing so:  Generic Routing Encapsulation (GRE) and IP Rewrite.  For most environments, GRE should be used for network virtualization because it provides the most flexibility and performance.  However, IP Rewrite may be appropriate for performance and compatibility in some current high-capacity datacenters.

Within the source and destination hypervisors, packets are associated with a Virtual Subnet ID.  The Virtual Subnet ID allows the hypervisor to differentiate traffic from different virtual subnets that may share the same CA IP address (e.g., differentiating Red 10.1.1.7 from Blue 10.1.1.7).  Using the Virtual Subnet ID, the hypervisor can also apply additional per-tenant policies, such as access controls.
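As a sketch of such a per-tenant policy, an access-control check keyed on the Virtual Subnet ID might look like the hypothetical Python below; the Virtual Subnet ID values and policy table are made up for the example.

```python
# Hypothetical Virtual Subnet IDs for the Red and Blue virtual subnets.
RED_VSID, BLUE_VSID = 5001, 6001

# Isolation policy: which pairs of virtual subnets may exchange traffic.
ALLOWED_PAIRS = {(RED_VSID, RED_VSID), (BLUE_VSID, BLUE_VSID)}

def forwarding_allowed(src_vsid: int, dst_vsid: int) -> bool:
    """Forward a packet only if policy allows the two virtual subnets to communicate."""
    return (src_vsid, dst_vsid) in ALLOWED_PAIRS

assert forwarding_allowed(RED_VSID, RED_VSID)          # Red-to-Red traffic passes
assert not forwarding_allowed(RED_VSID, BLUE_VSID)     # Red-to-Blue traffic is dropped
```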

The first IP virtualization mechanism is Generic Routing Encapsulation (GRE), an established IETF standard.  In this case we encapsulate the VM’s packet (using CA IP addresses) inside another packet (using PA IP addresses).  The header of this new packet also contains a copy of the Virtual Subnet ID.  A key advantage of GRE is that because the Virtual Subnet ID is included in the packet, network equipment can apply per-tenant policies on the packets, enabling efficient traffic metering, traffic shaping, and intrusion detection.  Another key advantage of GRE is that all the VMs residing on a given end host can share the same PA, because the Virtual Subnet ID can be used to differentiate traffic from the different virtual subnets.  Sharing the PA has a big impact on scalability: the number of IP and MAC addresses that need to be learned by the network infrastructure can be substantially reduced. For instance, if every end host has an average of 20 VMs, the number of IP and MAC addresses that need to be learned by the networking infrastructure is reduced by a factor of 20.

A current drawback of GRE is that NIC offloads no longer benefit the end host, because the offloads operate on the outer header rather than the inner header. These offloads can be important for high-performance environments where a VM requires 10-gigabit bandwidth.  Similarly, entropy for datacenter multi-path routing is reduced because switches, by hashing fields in only the outer packet, will not differentiate traffic coming from different VMs residing on the same end host.

Never fear!  We have a solution for these limitations.

In Windows Server “8” we’ve made working with standards a high priority. Along with key industry thought leaders (Arista, Broadcom, Dell, Emulex, HP, and Intel), we published an informational draft RFC (NVGRE) describing the use of GRE, an existing IETF standard, as an encapsulation protocol for network virtualization.  Together with server, switch, and NIC partners, we have demonstrated broad ecosystem support for Hyper-V Network Virtualization. Once our partners incorporate NVGRE into their products, hosters will get the scalability benefits of GRE without performance loss.  They will also see opportunities to deploy multi-tenant-aware network equipment, including load balancers, firewalls, storage controllers, network monitoring and analysis tools, and other security and performance products.
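To give a feel for what the encapsulation carries, here is a rough byte-level sketch of a GRE header with the Virtual Subnet ID in the key field, based on our reading of the NVGRE draft. This is illustrative Python only; real implementations live in the networking stack and NIC hardware.

```python
import struct

def build_nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Sketch of an NVGRE-style GRE header: Key Present bit set, protocol type
    0x6558 (Transparent Ethernet Bridging), and the 32-bit key split into a
    24-bit Virtual Subnet ID plus an 8-bit flow identifier."""
    flags_and_version = 0x2000            # only the Key Present bit is set
    protocol_type = 0x6558                # Transparent Ethernet Bridging
    key = ((vsid & 0xFFFFFF) << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags_and_version, protocol_type, key)

# The outer IP header (PA addresses) would precede this header, and the tenant's
# original frame (CA addresses) would follow it.
print(build_nvgre_header(vsid=5001).hex())
```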

GRE is the preferred network virtualization approach for most current and future datacenters.  However, some current datacenters may need more performance than GRE can achieve with current-generation hardware.  For these environments, Windows Server “8” supports a second IP virtualization mechanism, IP Rewrite.

With IP Rewrite, we rewrite the source and destination CA IP addresses in the packet with the appropriate PA addresses as packets leave the end host. Similarly, when virtual subnet packets enter the end host, the PA IP addresses are rewritten with the appropriate CA addresses. A key advantage of IP Rewrite is that the packet format is not changed, so existing network hardware offload technologies such as Large Send Offload (LSO) and Virtual Machine Queue (VMQ) work as expected.  These offloads provide significant benefit for network-intensive scenarios in a 10 Gigabit Ethernet environment.  In addition, IP Rewrite is fully compatible with existing network equipment, which does not see any new traffic types or formats.  Of course, the Virtual Subnet ID is not transmitted on the network, so existing network equipment cannot perform per-tenant packet processing.
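A minimal sketch of the idea follows; the packet representation and the one-to-one CA/PA mapping are assumptions made for the example, not Hyper-V’s actual code.

```python
# One-to-one CA <-> PA mapping assumed for this illustration.
CA_TO_PA = {"10.1.1.7": "192.168.1.11", "10.1.1.8": "192.168.1.12"}
PA_TO_CA = {pa: ca for ca, pa in CA_TO_PA.items()}

def on_egress(packet: dict) -> dict:
    """Leaving the end host: rewrite CA source/destination to PA addresses."""
    return {**packet, "src": CA_TO_PA[packet["src"]], "dst": CA_TO_PA[packet["dst"]]}

def on_ingress(packet: dict) -> dict:
    """Entering the end host: rewrite PA source/destination back to CA addresses."""
    return {**packet, "src": PA_TO_CA[packet["src"]], "dst": PA_TO_CA[packet["dst"]]}

# The packet layout itself never changes, which is why existing NIC offloads
# and network equipment continue to work.
pkt = {"src": "10.1.1.7", "dst": "10.1.1.8", "payload": b"hello"}
assert on_ingress(on_egress(pkt)) == pkt
```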

An Incremental Approach to Creating Hybrid Clouds
With Hyper-V Network Virtualization we’ve made it easy to move your subnets to the cloud. However, once in the cloud, the next thing you need is for your virtual subnets to interact with each other. For example, a typical 3-tier application is composed of a front-end tier, a business-logic tier, and a database tier. You need a way for these virtual subnets (the tiers in this example) to communicate as if they were all located in your own datacenter. Hyper-V Network Virtualization allows you to route between your virtual subnets. That is, not only can you bring your virtual subnet to the cloud, you can also bring your entire network topology to the cloud.
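As a rough sketch of what “bringing your topology to the cloud” means, the hypothetical Python below models a tenant’s three tiers as three virtual subnets and picks the destination virtual subnet for a packet from its CA. The Virtual Subnet IDs and prefixes are made up for the example.

```python
import ipaddress

# Hypothetical virtual network: three tiers, each its own virtual subnet.
RED_VIRTUAL_NETWORK = {
    "frontend": {"vsid": 5001, "prefix": "10.1.1.0/24"},
    "business": {"vsid": 5002, "prefix": "10.1.2.0/24"},
    "database": {"vsid": 5003, "prefix": "10.1.3.0/24"},
}

def destination_vsid(dst_ca: str) -> int:
    """Route within the virtual network: find the tier whose prefix contains the CA."""
    for tier in RED_VIRTUAL_NETWORK.values():
        if ipaddress.ip_address(dst_ca) in ipaddress.ip_network(tier["prefix"]):
            return tier["vsid"]
    raise LookupError(f"{dst_ca} is not part of this virtual network")

# A front-end VM talking to the database tier stays inside the tenant's own topology.
assert destination_vsid("10.1.3.42") == 5003
```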

Windows Server “8” also provides a Cloud Cross-Premise Connectivity solution that can securely connect your datacenter or private cloud with a public cloud to create a hybrid cloud.  Combining Hyper-V Network Virtualization with Cloud Cross-Premise Connectivity means we have made the cloud a seamless extension of your datacenter.

Internally at Microsoft, we use Hyper-V Network Virtualization in a private cloud deployment with GRE as the IP virtualization mechanism. Here the tenants are the various product groups in the Server and Tools Business unit (STB). We wanted to consolidate our datacenter infrastructure to realize the operational and resource efficiencies of the cloud while giving our product groups the flexibility they need when deploying their services in a cloud environment.

Conclusion
We’re excited about Hyper-V Network Virtualization because it benefits customers moving to private, hybrid, and public clouds; provides new efficiencies for hosters and administrators running cloud datacenters; and presents new opportunities for our ecosystem partners.  Hyper-V Network Virtualization—combined with other technologies such as Storage Live Migration, simultaneous Live Migration, and Failover Replication—enables complete VM mobility with Windows Server “8.”

To learn more about Hyper-V Network Virtualization, watch the demo we gave at //BUILD/; our demo starts at 13 minutes and 52 seconds.  Our //BUILD/ talk, Building secure, scalable multi-tenant clouds using Hyper-V Network Virtualization, provides more technical details.  For deployment information, we encourage you to visit our TechNet site.