Clouds are on everybody's mind and tongue these days, and we, the analytical crowd, are starting to get tired of the fuzziness and fogginess around definitions, especially when "cloud" and "communication" start appearing in the same sentence.
As with many other marketing creations, "the cloud" is like an onion: you peel one layer, expose another one beneath it, and keep going until you find the core. A look beneath the cloud label reveals its major characteristic: elasticity of computing resources.
Peeling off that layer allows us to see the management tools that are required to automate resource allocation on demand and support elasticity. Peeling off one more layer exposes the core of the cloud: virtualization, or more specifically x86 server virtualization.
The traditional server architecture consists of hardware (generally based on the x86 architecture), an operating system (Windows, Linux, Mac, etc.), and applications running on top of the OS. Server virtualization hides server resources, including the number and identity of individual physical servers, processors, and operating systems, from the application. Special software called a hypervisor divides one physical server into multiple isolated virtual environments, known as virtual machines (VMs).
A virtual machine typically emulates a physical computing environment, but its requests for CPU, memory, disk, network, and other hardware resources are managed by a virtualization layer that translates them to the underlying physical hardware.
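To make the hypervisor's role concrete, here is a minimal, purely illustrative Python sketch of a hypervisor carving one physical x86 server into isolated VMs and checking each resource request against the underlying hardware. All names are invented for illustration; this is not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    memory_gb: int

class Hypervisor:
    """Toy hypervisor: owns the physical resources of one x86 server and
    hands isolated slices of them to virtual machines."""

    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.vms = []

    def create_vm(self, name, vcpus, memory_gb):
        # The virtualization layer checks every request against the
        # physical hardware before granting it (real hypervisors can
        # also overcommit resources; this sketch keeps it simple).
        if vcpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError(f"not enough physical resources for {name}")
        self.free_cpus -= vcpus
        self.free_memory_gb -= memory_gb
        vm = VirtualMachine(name, vcpus, memory_gb)
        self.vms.append(vm)
        return vm

# One physical server hosting several isolated VMs
host = Hypervisor(cpus=32, memory_gb=256)
host.create_vm("web-01", vcpus=4, memory_gb=16)
host.create_vm("crm-01", vcpus=8, memory_gb=64)
print(f"left on the host: {host.free_cpus} CPUs, {host.free_memory_gb} GB RAM")
```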
So why would we introduce that additional complexity to the already complex data center? The primary reason is that virtualization allows for server consolidation and operational efficiency, thus delivering huge cost savings to data center operators. As a rule of thumb, virtualization reduces the number of physical servers in the data center by a factor of 10, although some projects result in even higher gains.
Less server hardware means not only lower CapEx (buying 10 servers instead of 100) but also lower OpEx (each server requires additional space, power, cooling, maintenance, and administration). I calculated that virtualizing just 130 servers can lead to savings in excess of $1M in the first year. The details are in my latest white paper, "Virtualizing Video Infrastructure," which can be downloaded here.
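As a rough back-of-envelope illustration of where such savings come from, the short sketch below assumes a 10:1 consolidation ratio and placeholder per-server costs. The dollar figures are illustrative assumptions only, not the numbers from the white paper.

```python
# Hypothetical first-year cost comparison for consolidating 130 physical
# servers at a 10:1 ratio. All prices are illustrative assumptions.
physical_servers = 130
consolidation_ratio = 10
virtual_hosts = physical_servers // consolidation_ratio   # 13 hosts

server_capex = 7_000               # per physical server (assumed)
opex_per_server_per_year = 3_500   # power, cooling, space, admin (assumed)
hypervisor_license = 4_000         # per virtualization host (assumed)

cost_without = physical_servers * (server_capex + opex_per_server_per_year)
cost_with = virtual_hosts * (server_capex + opex_per_server_per_year
                             + hypervisor_license)

print(f"Without virtualization: ${cost_without:,}")
print(f"With virtualization:    ${cost_with:,}")
print(f"First-year savings:     ${cost_without - cost_with:,}")
```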
In terms of operational efficiency, virtual machines are easier to manage (start, stop, replicate, etc.) than physical servers. VMs can easily be moved from one data center to another, and new instances of an application can be spun up much faster than they could be deployed on dedicated hardware. Virtualization can also dramatically simplify logistics for global businesses: it eliminates the need to certify and import/export hardware.
Finally, there is the whole issue of redundancy and survivability, which cannot be solved efficiently with appliances because the enterprise has to keep additional appliances on standby (read: extra licenses, extra space, extra cooling, extra administration) to take over if the primary appliance fails. Redundancy and survivability are inherently supported by virtualization platforms, which allow enterprises to implement failover across their geographically distributed virtualized data centers or, if the enterprise has only one data center, to fail over to a service provider's data center. Many more benefits and scenarios are discussed in the white paper mentioned above.
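The sketch below, with hypothetical site names and a simulated health table, illustrates the kind of failover logic a virtualization platform can automate across sites. It is a conceptual model only, not any platform's actual API.

```python
# Minimal failover sketch: if a VM in the primary data center stops
# responding, restart its replicated image in a standby site. Site names
# and the health table are hypothetical, for illustration only.

HEALTH = {("dc-east", "call-manager"): False,   # primary site is down
          ("dc-west", "call-manager"): True}    # standby site is up

def is_healthy(site, vm):
    return HEALTH.get((site, vm), False)

def start_vm(site, vm):
    # A real platform would boot the VM image already replicated to `site`;
    # here we simply mark it as running.
    HEALTH[(site, vm)] = True

def failover(vm, primary, standby_sites):
    if is_healthy(primary, vm):
        return primary
    for site in standby_sites:
        start_vm(site, vm)                      # no spare appliance required
        if is_healthy(site, vm):
            return site
    raise RuntimeError(f"no site could take over {vm}")

# Fail over across distributed data centers, with a service provider's
# cloud as the last resort.
print(failover("call-manager", "dc-east", ["dc-west", "provider-cloud"]))
```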
The most popular virtualization platform today is VMware's vSphere, followed by Microsoft Hyper-V and Citrix XenServer. Enterprises have embraced virtualization and have moved non-real-time applications such as web servers, email, and CRM from dedicated servers to virtual machines running in data centers. If the data centers belong to the enterprise, people often talk about a "private cloud"; if they belong to a service provider, it is a "public cloud." By some estimates, around 70% of enterprises have deployed virtualization, and more than 50% of the applications in these enterprises have already been virtualized.
Real-time applications, such as voice and video, require more deterministic behavior and do not tolerate latency, jitter, or throughput limitations. In addition, conferencing servers and media gateways require real-time media processing, often implemented in proprietary DSP-based hardware. Converting these applications to software running on x86 processors and then virtualizing them is a difficult task. The technology challenges notwithstanding, the strong trend in the communication industry away from hardware-based solutions and toward software running in virtual environments is pushing all vendors to innovate and virtualize their infrastructure solutions.
Voice and UC systems are moving toward virtualization faster than video because processing audio signals places lighter demands on CPU and memory. The VMware Solution Exchange lists the applications tested and certified to run on VMware vSphere. Mitel has certified its entire portfolio of infrastructure products, while Avaya and ShoreTel have certified their communication manager and contact center solutions. Even some smaller players, such as Zultys, are working on virtualizing their communication platforms.
Video applications have higher performance requirements than voice and are technically harder to virtualize. Vidyo is one exception, thanks to its video routing architecture, which eliminates transcoding in multipoint calls and allows Vidyo to deliver higher scalability at lower cost in virtualized environments.
Other video conferencing vendors, such as Polycom and LifeSize, are porting their transcoding MCUs from DSP-based architectures to virtual machines, an engineering feat in itself that requires rewriting code and involves sacrifices in scalability and quality. Several new players, among them Vidtel and Blue Jeans Network, have developed video conferencing services that run in virtualized environments. Although these services transcode the video signal on multipoint and gateway calls, they were designed from scratch for virtual environments and scale very well.
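To put the routing-versus-transcoding distinction in concrete terms, here is a deliberately simplified sketch comparing the server-side work of the two approaches. The cost units are arbitrary illustrative values, not measurements from Vidyo, Polycom, or any other vendor.

```python
# Rough, illustrative comparison of server-side work per multipoint call.
# The unit costs are arbitrary numbers chosen only to show why a routing
# architecture virtualizes more easily than a transcoding MCU.

FORWARD_COST = 1   # relay one encoded stream without touching the media
DECODE_COST = 50   # decode one participant's stream
ENCODE_COST = 50   # encode one outgoing stream

def routing_server_load(participants):
    # A video router forwards each incoming encoded stream to the other
    # participants; the media is never decoded on the server.
    return participants * (participants - 1) * FORWARD_COST

def transcoding_mcu_load(participants):
    # A transcoding MCU decodes every incoming stream and encodes a
    # composited stream for every participant.
    return participants * DECODE_COST + participants * ENCODE_COST

for n in (4, 10, 25):
    print(f"{n:>2} participants: routing={routing_server_load(n):>4}  "
          f"transcoding={transcoding_mcu_load(n):>4}")
```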
In conclusion, virtualization has secured its place in the enterprise, and many CIOs mandate that all new applications be virtualized. Voice and video communication systems are often the only applications that have not yet been virtualized, and the pressure is on all vendors in our industry to test and certify their complete infrastructure portfolios in virtualized environments, whether they will be deployed in private or public clouds.