
The Virtual Data Center Put in Perspective

OK, this may be a pun, but it's true: getting your arms around this virtual stuff is hard.

That's particularly true because the drivers for virtualization are so diverse. The one common element in all of the virtualization concepts is the notion of an abstraction: a simplified representation that's easy to work with while the hard stuff stays hidden underneath. From there, you're on your own. The good news is that you can go pretty far with this simple start, particularly if you apply the abstraction notion first to the data center.

Along Came the VMs

Virtualization was introduced to IT and the data center when "virtual machines," or VMs, came along. Even this single trend had multiple drivers. Hardware was growing in power, with CPUs adding cores and servers adding CPUs. The result was steady growth in compute power, more and more of which was wasted by traditional multiprogramming operating systems.

Part of the reason for the waste was that many applications needed to be isolated from other apps, either to ensure stable performance or to address security and governance concerns. VMs are an abstraction of hardware, representing something that looks like a server and can be managed like a server, but that resides with other VMs on the same physical device. Isolated, protected, abstracted.
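
If you want to make that abstraction concrete, here's a minimal sketch, assuming the libvirt-python bindings and a local QEMU/KVM host (neither of which this discussion requires), that simply lists the VMs sharing one physical server:

    # A minimal sketch, assuming the libvirt-python bindings and a local
    # QEMU/KVM hypervisor: enumerate the VMs that share this one host.
    import libvirt

    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():
            # Each domain is an isolated, hardware-abstracted "server"
            # co-resident with the others on this physical machine.
            print(dom.name(), "running" if dom.isActive() else "stopped")
    finally:
        conn.close()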

Containers, which are simple, lightweight virtual elements that also share a server, have been exploding in popularity because they use fewer server resources than VMs and thus allow more applications to be packed into a given data center. Every data center, even those of mid-sized businesses, will likely become totally virtualized using one or both of these technologies.
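
To make the packing argument concrete, here's a small sketch, assuming the Docker SDK for Python and a local Docker daemon (the image name and limits are just illustrations), that launches a batch of lightweight containers on one host:

    # A sketch assuming the Docker SDK for Python: pack several small,
    # isolated containers onto one host, each with a modest memory cap.
    import docker

    client = docker.from_env()
    containers = [
        client.containers.run(
            "alpine",              # an example image
            ["sleep", "300"],      # keep each container alive briefly
            mem_limit="64m",       # far less than a typical VM would claim
            detach=True,
            name=f"demo-app-{i}",
        )
        for i in range(10)
    ]
    print(len(containers), "containers sharing one server")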

Virtual Networks, in Two Flavors

And that's created the second dimension of data center abstraction. Sharing a network is like sharing a server in many ways. You have the risk that application traffic crosstalk will create performance issues. You have the risk of security breaches and failed compliance audits. Almost from the first VM deployments, people recognized that you needed virtual networks to connect the VMs or containers.

Virtual networks have two main categories of their own, though the two don't map particularly to the server-virtualization models of VMs or containers. The first category is the integrated virtual network model, where the actual network devices (the switches that create the data center LANs) create and sustain the virtualization. We know this approach as the VLAN or, more recently, as its extended-address-space cousin VXLAN, which stretches the 12-bit VLAN ID into a 24-bit segment identifier. VLANs and VXLANs are "native" to Ethernet, but they require all the switching devices in the data center to support them, and they're limited by traditional Ethernet LAN features.
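
For those who like to see the bits, here's a short sketch in plain Python (the layout follows the VXLAN spec, RFC 7348) that builds a VXLAN header and shows why a 24-bit identifier gives so much more room than a 12-bit VLAN ID:

    # The 8-byte VXLAN header from RFC 7348: a flags byte with the I bit
    # set, then a 24-bit VXLAN Network Identifier (VNI), padded with
    # reserved bits. Compare the VNI space to the 12-bit 802.1Q VLAN ID.
    import struct

    def vxlan_header(vni: int) -> bytes:
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI must fit in 24 bits")
        flags_and_reserved = 0x08 << 24       # I flag set, rest reserved
        return struct.pack("!II", flags_and_reserved, vni << 8)

    print(len(vxlan_header(5000)), "byte header")   # 8
    print("VLAN segments: ", 2 ** 12)               # 4,096
    print("VXLAN segments:", 2 ** 24)               # 16,777,216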

Another data center virtual network model comes from Nicira, later acquired by VMware. The Nicira approach builds an overlay virtual network (OVN) by adding tunnels on top of the network protocols already in place. The advantages of this approach are considerable, to the point where overlay-model virtualization is sweeping the market.

The biggest benefit of the OVN model is that it doesn't matter what lower-level switching technology or vendor is in place, or what mixture: if you can carry traffic between devices, you can carry OVNs to the VMs or containers on those devices. A second benefit is that you can create and manage OVNs without giving applications or users access to features of the real network devices, access that could let them change the network's behavior for other users or applications. Finally, you can do things like prioritize traffic or add encryption within an OVN without involving anyone else; it's truly your network.
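
The overlay idea itself is easy to sketch: wrap the tenant's packet in an ordinary UDP datagram addressed to the far-end tunnel endpoint, so the switches underneath see nothing but normal IP traffic. Here's a minimal, standard-library Python illustration; the header format, addresses, and tenant ID are made up for the example (only the 4789 port number is borrowed from VXLAN):

    # A sketch of overlay encapsulation: the tenant's frame rides inside a
    # plain UDP datagram between tunnel endpoints, so the physical network
    # never needs to understand the overlay. Addresses are illustrative.
    import socket
    import struct

    def encapsulate(tenant_id: int, inner_frame: bytes) -> bytes:
        # Prefix the tenant's frame with a tiny made-up overlay header.
        return struct.pack("!I", tenant_id) + inner_frame

    def send_over_tunnel(inner_frame: bytes, remote_vtep: str = "198.51.100.7"):
        payload = encapsulate(tenant_id=42, inner_frame=inner_frame)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(payload, (remote_vtep, 4789))  # looks like ordinary UDP

    send_over_tunnel(b"\x00" * 64)  # a stand-in for a real Ethernet frame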

Operations Automation

Modern data center technology tends to unite server and network virtualization. Cloud stacks like OpenStack and container tools like Docker provide a limited form of virtual networking to steer traffic among containers or VMs. Both VM and container systems are often augmented with DevOps and orchestration tools (Chef, Puppet, or Ansible for VMs, for example, and Kubernetes for Docker) that offer richer virtual-network support.
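
As a hedged illustration of what that built-in virtual networking looks like, the sketch below uses the Docker SDK for Python to create a user-defined network and attach two containers to it; the names and images are invented for the example:

    # A sketch, again assuming the Docker SDK for Python: create an
    # isolated virtual network and attach two containers so they can reach
    # each other by name while staying walled off from everything else.
    import docker

    client = docker.from_env()
    net = client.networks.create("orders-net", driver="bridge")

    db = client.containers.run("redis", detach=True, name="orders-db",
                               network="orders-net")
    app = client.containers.run("alpine", ["sleep", "300"], detach=True,
                                name="orders-app", network="orders-net")

    # Inside "orders-app", the database is reachable simply as "orders-db";
    # containers outside "orders-net" can't see either of them.
    print(net.name, [c.name for c in (db, app)])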

DevOps tools, available in some form for more than a decade, are enjoying a wave of popularity because of data center virtualization. Operations automation in a virtual data center is essential because virtualization explodes the management problem. If we assume a single server can host a half-dozen VMs or about two dozen containers, you can see that the difficulty of running and connecting hosts is multiplied by an order of magnitude or more.
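
The back-of-the-envelope arithmetic is easy to check with the hosting densities assumed above (the fleet size is just an example):

    # Back-of-the-envelope math using the figures from the paragraph above:
    # one server hosting roughly 6 VMs or roughly 24 containers.
    physical_servers = 100                 # an illustrative fleet size
    vms = physical_servers * 6             # 600 virtual hosts to manage
    containers = physical_servers * 24     # 2,400 containers to manage

    print(physical_servers, "servers ->", vms, "VMs or", containers, "containers")
    # Either way, the number of things to deploy, connect, and monitor
    # grows by roughly an order of magnitude, hence the push to automate.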

Eventually virtualization and its cousin, cloud computing, will change applications themselves. You can already see this in Twitter's use of functional programming or Uber's microservice-and-event design. Cloud providers like Amazon, Microsoft, and Google are mainstreaming the serverless event-driven model, and that's going to percolate through developers to make virtualization's capabilities a requirement for future applications. This change will be slow to mature, given the enormous amount of inertia in software, but it's going to happen.
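
To give a flavor of that serverless, event-driven style, here's a minimal sketch of a function written to the AWS Lambda handler convention; the event shape and the business logic are invented for the example:

    # A sketch of the serverless, event-driven style: no server to manage,
    # just a function the platform invokes once per event. The handler
    # signature follows the AWS Lambda convention; the fields are invented.
    import json

    def handler(event, context):
        order = json.loads(event.get("body", "{}"))
        total = sum(item["qty"] * item["price"] for item in order.get("items", []))
        return {"statusCode": 200, "body": json.dumps({"total": total})}

    # Local usage example (a real platform would call handler() for us):
    fake_event = {"body": json.dumps({"items": [{"qty": 2, "price": 9.99}]})}
    print(handler(fake_event, None))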

All this happy data center change has to be connected to users, customers, and consumers to be useful, of course. Data center virtualization typically creates a "subnet" model, a segment of a corporate network. That corporate network is a WAN, and how virtualization impacts that WAN is the subject of our next blog.
