No Jitter is part of the Informa Tech Division of Informa PLC


Virtualize Your Voice/UC Infrastructure

So, you've been hearing about virtualization, virtual infrastructure, virtual desktops, private clouds, public clouds, hybrid clouds, and bring-your-own-device (not to mention your lunch!). What does all that have to do with Unified Communications and Collaboration, and why should you care? Well...lots...and you'd better pay attention! Welcome to a series dedicated to virtualizing Voice/UC--why it's important, why you should care, what you need to know about it, and what you need to do about it.

Some History: How Did We Get Here?
Over the last decade, our industry has gone through a significant--at times mind-bending--transformation. It's important to realize where we came from, how Voice/UC and data center infrastructure have each evolved, and what that now implies in terms of the art of the possible...and I'll park the whole area of mobility to the side for the moment!

The old "telecoms" was an environment with dedicated purpose-built hardware, dedicated circuit-switched voice networks, and dedicated highly specialized teams of people to manage all of this for any given business. Along came Voice-over-IP (VoIP)--now pretty much a given for a lot of us--which enabled us to converge voice onto shared IP networks, with software technology that guaranteed the necessary Quality-of-Service on that shared pipe.

That transition started a ripple effect: traditional telecoms-centric infrastructure and departments collapsed into data-centric "IT" networks--the infrastructure--and data-centric "IT" departments--the people. In the early days of this evolution, vendor-proprietary IP-PBX/UC hardware was still in play, primarily to handle media encoding/decoding and traditional TDM Service Provider interconnect. Then along came more powerful industry-standard x86 servers on one hand and SIP interconnect on the other. As x86 servers evolved following Moore's Law, media encoding/decoding became viable as a software implementation, also referred to as host-media processing. At the same time, SIP has become ever more pervasive over the last few years, and with some vendors offering virtualized border gateways, the need for dedicated hardware gateways to interconnect with Service Providers has started to go away.

All of this led to the development and delivery of 100% software-centric Voice/UC solutions, and to the fundamental realization that Voice/UC is just another business application running in the corporate data center--albeit one with particularly stringent real-time processing characteristics that, for the time being, relegated it to dedicated x86 servers. In most cases, if you wanted a full suite of Voice/UC from any one of the major vendors in this space, you still needed several dedicated servers.

In parallel with the communications evolution (or revolution), the traditional IT data center was undergoing its own revolution--grappling with an explosion of business software applications and the dedicated servers required to run them all. At the best of times, many of these business applications only required a fraction of the compute power that the dedicated server offered, and utilization levels on those servers would only go down further as compute power grew with every generation. In addition, each application had its own management paradigm, backup processes, availability/resiliency solution, etc. All this cost money and resources to acquire and maintain.

With virtualization, a concept that originated in the old mainframe days and was re-imagined on x86 server technology, hypervisor software provides an abstraction layer that enables a single physical x86 server to appear as if it were several servers. The business applications run in independent, wholly contained virtual machines set up by the hypervisor software. Each virtual machine thinks it has access to dedicated server resources--a dedicated CPU, dedicated memory, dedicated hard drive, etc.
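To make the abstraction concrete, here is a minimal sketch--not any vendor's actual hypervisor logic--of the resource accounting behind that illusion: each virtual machine carries a reservation of virtual CPUs and memory, and the host admits a new VM only if its total reservations still fit within physical capacity. The VM names, sizes, and the simple admission rule are all illustrative assumptions; real hypervisors also support oversubscription, shares, and limits.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    vcpus: int    # virtual CPUs the guest "sees" as dedicated
    mem_gb: int   # memory the guest "sees" as dedicated

@dataclass
class Host:
    cores: int
    mem_gb: int
    vms: list = field(default_factory=list)

    def admit(self, vm: VM) -> bool:
        # For real-time workloads like voice, reservations must be hard:
        # refuse any placement that would oversubscribe the host.
        used_cpu = sum(v.vcpus for v in self.vms)
        used_mem = sum(v.mem_gb for v in self.vms)
        if used_cpu + vm.vcpus <= self.cores and used_mem + vm.mem_gb <= self.mem_gb:
            self.vms.append(vm)
            return True
        return False

host = Host(cores=16, mem_gb=64)
print(host.admit(VM("call-control", 4, 8)))   # True
print(host.admit(VM("conferencing", 8, 16)))  # True
print(host.admit(VM("voicemail", 8, 8)))      # False: would oversubscribe CPU
```

The point of the sketch is the guarantee, not the bookkeeping: as long as reservations never exceed physical capacity, each VM really can behave as if its CPU and memory were dedicated--which is exactly what a latency-sensitive voice workload needs.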

The basic benefits of this technology are fairly apparent: one physical server can run two, three, or even several business applications. A data center can consolidate and reduce the number of physical servers (or hosts) needed to run its applications (or workloads), which in turn reduces the physical footprint of the data center and the power required to run and cool it. That alone is quite compelling. Even more compelling are the management tools made available in conjunction with the virtual infrastructure, which enable consolidated, consistent manageability of all those virtualized applications. A common virtual machine management framework typically includes backup/recovery; automated policies that spin servers up and down to further optimize utilization; movement of virtual machines from one server to another to optimize costs and ensure applications have adequate resources at peak use; automatic restart of applications should a server or virtual machine fail; lock-step fault tolerance; and even protection of an entire data center through managed recovery at an alternate site--in other words, business continuity following defined corporate policies and procedures.
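The consolidation math behind those savings can be sketched with a toy bin-packing model--again an illustration, not any vendor's actual placement algorithm. Assume ten applications that each use only 10-30% of a server, expressed here as integer percentages; first-fit-decreasing packing shows how few hosts they actually need.

```python
def consolidate(workloads, host_capacity):
    """First-fit-decreasing bin packing: place each workload (largest
    first) on the first host with room, opening a new host only when
    nothing fits. Returns the number of hosts used."""
    hosts = []  # remaining capacity on each host
    for load in sorted(workloads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] = free - load
                break
        else:
            hosts.append(host_capacity - load)
    return len(hosts)

# Ten applications, each using 10-30% of one server's capacity:
loads = [10, 15, 30, 20, 25, 10, 30, 15, 20, 25]
print(consolidate(loads, host_capacity=100))  # 2 hosts instead of ten
```

Ten dedicated servers collapse to two--before counting the reduced floor space, power, and cooling, which is why the utilization argument alone drove so much early adoption.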

In the early days, virtualization wasn't for the faint of heart, and it started off with consolidating non-critical, non-real-time applications. As corporate IT got more comfortable with the technology, and solutions from virtualization vendors matured along with more capable hypervisor-optimized hardware from leading server manufacturers, more and larger workloads became virtualized, including business-critical applications. Today, well over half of all applications in business data centers run on virtual infrastructure, and this number continues to grow as the technology becomes capable of running applications--such as real-time voice--that were previously considered too sensitive for such an environment. Furthermore, virtualization infrastructure has become more affordable, easier to use, and more readily accessible, enabling smaller businesses that would otherwise not consider it to benefit from these advances.

Voice, Meet Virtualization
Leading communications vendors now offer pure software-based solutions that are completely independent of underlying hardware. Leading virtualization solution vendors offer state-of-the-art hypervisors that enable virtualization of real-time-sensitive workloads and powerful tools to manage virtualized data centers in a consistent fashion. Server technology from major hardware vendors has matured to include embedded hypervisor technologies. The industry has been able to virtualize not only peripheral Voice/UC applications, but also real-time-sensitive call control and media-processing applications: core telephony, audio/video conferencing, voice messaging, etc.

So why is this important? Because now you can consolidate Voice/UC as an integral part of your data center fabric--saving capital expense and, arguably more importantly, operational expense, by managing your Voice/UC consistently alongside all your other business applications on shared infrastructure, with common best practices in business continuity.

In our next post, we'll delve into more detail on what it takes to virtualize voice, what you need to be aware of, and how voice quality can be confidently delivered. Over time, we'll continue to expand on other aspects of virtual voice, including virtual desktop and why virtualization is important to cloud UC enablement.