
A Deep Dive: Virtualization is More than Meets the Eye

Image: sleepyfellow - Alamy Stock Photo
OK, since nothing truly meets the eye in virtualization, my headline may seem unnecessary, but there’s a subtle truth behind it. We see “virtualization” (the creation of a simulated computing environment instead of a physical version) almost exclusively in the context of cloud computing. And it may be the other aspects of virtualization that will make it the most critical piece of technology in our toolbox for the next five years or more.
 
In technology, a “virtual something” is a feature that appears to be a single device or a collection of devices. In fact, it’s a representation of a class of devices that can be mapped dynamically to real resources. Virtual machines, container services, public cloud computing, and virtual private networks (VPNs) are examples of the principle of virtualization we see and use daily.
 
Deeper down, though, virtualization combines two things: abstraction and realization. Your “virtual something” starts as an abstract representation that exposes the properties of the real thing to applications, but it is actually a software artifact, a collection of APIs. When you deploy something in the virtual world, that abstraction is “realized” onto actual resources. The future value of virtualization is that the abstraction doesn’t have to represent anything but the potential to be realized. Yes, we can have a “virtual machine” that looks like a server, but we could also define an abstraction that doesn’t represent any single thing we have.
 
You may think this is both crazy and useless, but think for a moment about why we use virtualization in clouds and networks in the first place. The resources we host services or applications on could include many different real devices in many different combinations and topologies. Writing an application that had to be explicitly and directly mapped to all those combinations would be cripplingly difficult and expensive. So we create an abstraction that our network users and applications “see” and map it to resources using standardized software tools. Virtualization, first and foremost, is about simplification and generalization.
 
Networks are complicated, right? We use “virtual networking” to make a VPN service look like a single device to a user. The network properties visible at the user/network interface are exposed, while all other complexity hides beneath the abstraction. Why not exploit this further?
 
Management is hellishly complex, but does it have to be? If we were to create abstractions representing the features of services—including how to manage them externally—we could use that process to simplify the management of networks in two ways. First, we could compose a simple “user-level” management abstraction. For example, you bought a service with a service level agreement (SLA)—here’s the SLA status. Second, we could manage each service feature within its realized resource commitments and report only failures to the higher user level.
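To make that first point concrete, here is a minimal sketch in Python; the names and SLA thresholds are hypothetical, but it shows the idea of a user-level abstraction that hides raw status parameters behind a single “is the SLA met?” answer.

```python
# Hypothetical sketch: the customer-facing management abstraction exposes only
# SLA status; the raw latency and availability counters stay beneath it.
from dataclasses import dataclass


@dataclass
class SlaTerms:
    max_latency_ms: float       # worst latency the SLA allows
    min_availability: float     # e.g., 0.999 for "three nines"


def sla_status(latency_ms: float, availability: float, sla: SlaTerms) -> str:
    # Everything the user sees: one answer, not a pile of counters and alarms.
    ok = latency_ms <= sla.max_latency_ms and availability >= sla.min_availability
    return "SLA met" if ok else "SLA violated"


print(sla_status(latency_ms=34.0, availability=0.9995,
                 sla=SlaTerms(max_latency_ms=50.0, min_availability=0.999)))
```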
 
How much time do enterprises spend deciding whether a given set of network status parameters adds up to an SLA violation? How much time do service providers spend figuring out whether a butterfly wing flapping in some network backwater is creating a wind that disrupts core transport resources a thousand miles away? Breaking a network down by features could make all of this easier.
 
The industry has recognized this sort of thing for some time. Intent modeling is the process of creating an abstraction of a feature that exposes only what you need to do with it, i.e., the “intent.” If the “connectivity” feature intends to provide IP connections, it needs to show only the IP interface and a status indicator to let the user know whether it is working. You could then create a “service” by combining the abstractions of its features. That service would be a consumer of the feature intent models, deriving its own user status indicator from the combined status of all the subordinate features.
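As a rough illustration, here is a small Python sketch of an intent model and a service built from it. The class and field names are invented for the example; the point is that the feature exposes only its interface and a working/not-working status, and the service’s status is nothing more than the combination of its features’ statuses.

```python
# Hypothetical sketch of intent modeling: a feature exposes only its "intent,"
# and a service is a consumer of feature intent models.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ConnectivityIntent:
    """An IP-connectivity feature: only the interface and a status are visible."""
    ip_interface: str       # e.g., "10.0.0.1/24"; all internal topology stays hidden
    working: bool = True    # the single status indicator the consumer sees


@dataclass
class ServiceIntent:
    """A service composed from feature intent models."""
    name: str
    features: List[ConnectivityIntent] = field(default_factory=list)

    @property
    def working(self) -> bool:
        # The service's user status is just the combined status of its features.
        return all(f.working for f in self.features)


vpn = ServiceIntent("branch-vpn", [ConnectivityIntent("10.0.0.1/24"),
                                   ConnectivityIntent("10.0.1.1/24")])
print(vpn.working)   # True until some subordinate feature reports a fault
```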
 
Suppose a feature breaks down. First, the generalized software used to realize the feature’s abstraction or intent model would work under the covers to repair the network, with no higher-level intervention. Since these feature models could be little functional atoms, the remediation process would be far less complicated than it would be if we were trying to repair an interdependent collection of devices scattered over a continent. If internal repair failed, the feature model would report a fault, and the service model would then tear down the realization of that feature and build another.
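A sketch of that two-level remediation might look like the following; again, the names are hypothetical, and the internal repair step is just a stand-in for whatever generalized software actually realizes the feature.

```python
# Hypothetical sketch of two-level remediation: the feature tries to fix itself
# first; only on failure does it report a fault, and the service then tears down
# and rebuilds that feature's realization.
import random


class Feature:
    def __init__(self, name: str):
        self.name = name
        self.faulted = False

    def try_internal_repair(self) -> bool:
        # Stand-in for the generalized software working "under the covers."
        return random.random() < 0.8

    def report_fault(self) -> None:
        self.faulted = True


class Service:
    def remediate(self, feature: Feature) -> None:
        if feature.try_internal_repair():
            return                            # repaired locally; nothing reported upward
        feature.report_fault()                # only the failure is visible to the service
        self.tear_down(feature)               # discard the failed realization...
        self.realize_replacement(feature)     # ...and map the abstraction to new resources

    def tear_down(self, feature: Feature) -> None:
        print(f"tearing down realization of {feature.name}")

    def realize_replacement(self, feature: Feature) -> None:
        print(f"realizing {feature.name} on a fresh set of resources")
        feature.faulted = False


Service().remediate(Feature("connectivity"))
```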
 
Management isn’t the only use of the deeper mechanisms of virtualization. Let’s look at the Internet of Things (IoT) and an application like traffic control. We could deploy sensors to tell vehicles what traffic is proximate to them, which would help avoid collisions. We see that today. We could also take the data from all the sensors in a city and abstract it into a single virtual feature we could call “routing.” A car, for example, instead of looking at perhaps a thousand sensors to pick the best route, could simply ask the routing feature.
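A toy Python sketch of that routing feature, with made-up routes and congestion numbers, shows how the abstraction would hide every individual sensor behind one question-and-answer interface.

```python
# Hypothetical sketch: thousands of traffic sensors abstracted into one
# "routing" feature that a vehicle can query for the best route.
from typing import Dict, List


class RoutingFeature:
    def __init__(self, congestion_by_route: Dict[str, float]):
        # In practice this map would be fed continuously by the city's sensors;
        # here it is static, made-up data.
        self._congestion = congestion_by_route

    def best_route(self, candidates: List[str]) -> str:
        # The vehicle never sees the sensors, only the answer.
        return min(candidates, key=lambda r: self._congestion.get(r, float("inf")))


routing = RoutingFeature({"main-street": 0.9, "river-road": 0.3, "bypass": 0.5})
print(routing.best_route(["main-street", "river-road", "bypass"]))  # -> river-road
```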
 
Virtualization is a solution to the complexities of modern technology. It can turn a bewildering collection of technical elements into something that offers a simple yes/no, go-here-or-there sort of response. It can direct application innovation to where it should be: creating ways to add value to lives and jobs rather than unscrambling webs of devices.
 
We have only scratched the surface of what virtualization can do for us. When we realize its full potential, there’s a darn good chance it will lead us through the most revolutionary changes in technology for years to come.