How to Simplify Branch Networks with NFV

Many enterprise organizations have a problem: the proliferation of networking hardware at branch sites. A typical branch site might have a mix of devices to provide routing, telephony service, security, wireless LAN control, WAN acceleration, and Layer 2 switching. Traditionally, each function requires its own box (oftentimes a simple, general-purpose compute platform that runs software implementing a function), resulting in high cost and long deployment and upgrade times.

Upgrading and configuring numerous devices at each branch consumes a lot of network staff time, and potentially requires the shipment of new hardware to each branch. Also problematic is getting new or upgraded hardware properly connected to the existing hardware and transitioning from the old to the new. Software upgrades are typically an all-or-nothing switch because the platforms often can only run one instance of the software at a time.

Can we change the process to make maintaining network functionality at a branch site easier?

The answer is, "Yes!" Turning these devices into virtual machine (VM) implementations is becoming commonplace. The obvious advantages are reduced cost and easier maintenance, and that's just the start, as we'll see below.

Cisco's Approach
As James Sandgathe, technical marketing engineer at Cisco, described last month during a Tech Field Day presentation at Interop 2016, the best approach is to minimize the amount of function-specific hardware needed at a branch. Deploying the desired functions as VM instances eliminates the need for multiple hardware platforms. A virtual Layer 2 switch (vSwitch) interconnects the VMs, just as if each function were running on its own hardware.

IT gains a lot of flexibility with a VM-based implementation. In the DevOps world, where development is tightly coupled with operations, new instances of updated applications are brought up in parallel with the old instances. IT directs some application traffic to the new instances to validate operation. If the update works as desired, IT directs all traffic to the new instance and retires the old one. If problems surface, IT redirects application traffic back to the old instance and the developers correct the problems before retrying the deployment.
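
As a rough illustration of that traffic-shifting idea, here is a minimal sketch in plain Python. The instance names and traffic shares are hypothetical, not anything Cisco-specific; it simply shows how a small, adjustable fraction of requests can be steered to a new instance and then dialed up to a full cutover.

```python
import random

# Hypothetical canary-style split between two application instances.
# new_traffic_share is the fraction of requests sent to the new instance;
# IT raises it as validation succeeds, or drops it back to 0.0 if problems surface.
INSTANCES = {"old": "app-v1.internal", "new": "app-v2.internal"}  # assumed names

def choose_instance(new_traffic_share: float) -> str:
    """Return the instance that should handle the next request."""
    return INSTANCES["new"] if random.random() < new_traffic_share else INSTANCES["old"]

# Validate with 10% of traffic, then 50%, then cut over completely.
for share in (0.1, 0.5, 1.0):
    sample = [choose_instance(share) for _ in range(10000)]
    hits = sample.count(INSTANCES["new"])
    print(f"share={share:.1f}: {hits} of 10000 requests hit the new instance")
```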

A software-only VM deployment of network functions facilitates a DevOps-style deployment model. The enterprise deploys a new chain of network functions on a hypervisor platform, using Open vSwitch to interconnect the virtual components. In the figure below, a physical network connection, eth0 (yellow), connects to the ISRv (a virtual router) in the right-hand blue circle. Traffic routes to the vWAAS (virtual WAN acceleration) on the left, then via the ASAv (virtual firewall) in the bottom center to the branch inside network (red).

Virtual network with NFV
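
To make the wiring concrete, here is a minimal sketch of how such a chain might be stitched together on the hypervisor host with Open vSwitch. It shells out to the standard ovs-vsctl tool; the bridge name and the VM-facing port names are assumptions for illustration, not taken from Cisco's design.

```python
import subprocess

# Hypothetical names: one bridge carries the physical uplink (eth0) plus the
# tap interfaces of the ISRv, vWAAS, and ASAv guests so they can be chained.
BRIDGE = "br-service"
PORTS = ["eth0", "isrv-tap0", "vwaas-tap0", "asav-tap0", "asav-tap1"]  # assumed names

def ovs(*args: str) -> None:
    """Run an ovs-vsctl command and raise if it fails."""
    subprocess.run(["ovs-vsctl", *args], check=True)

# --may-exist makes the calls idempotent if the bridge or port is already there.
ovs("--may-exist", "add-br", BRIDGE)
for port in PORTS:
    ovs("--may-exist", "add-port", BRIDGE, port)

# Show the resulting virtual switch layout.
print(subprocess.run(["ovs-vsctl", "show"], capture_output=True, text=True).stdout)
```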

What happens when IT needs to update the branch or create a new design that incorporates another function? It can create a parallel network configuration with the new components, using an ISRv instance to do policy-based routing of test traffic over the new, parallel configuration while the bulk of production traffic traverses the original virtual network. Testing proceeds until IT has migrated all production traffic to the new virtual network design, at which point it can remove the original virtual network from the system.
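
The policy-based routing step boils down to a simple classifier: traffic from a designated test subnet is handed to the new chain's first hop, while everything else stays on the production chain. The sketch below is a plain-Python illustration of that decision with made-up subnet and next-hop values; it is not Cisco configuration syntax.

```python
import ipaddress

# Hypothetical values: one test subnet is steered onto the new virtual network,
# all other branch traffic keeps using the original chain.
TEST_SUBNET = ipaddress.ip_network("10.10.99.0/24")
NEW_CHAIN_NEXT_HOP = "192.0.2.10"   # first hop of the new, parallel chain (assumed)
PRODUCTION_NEXT_HOP = "192.0.2.1"   # first hop of the existing chain (assumed)

def next_hop(source_ip: str) -> str:
    """Pick a next hop the way a policy-based routing rule would."""
    if ipaddress.ip_address(source_ip) in TEST_SUBNET:
        return NEW_CHAIN_NEXT_HOP
    return PRODUCTION_NEXT_HOP

print(next_hop("10.10.99.25"))  # test host -> 192.0.2.10 (new chain)
print(next_hop("10.10.20.7"))   # production host -> 192.0.2.1 (original chain)
```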

Throughout this scenario no one had to visit the branch. With network functions virtualization (NFV), IT is able to use virtual instances of all the components except for the hardware that provides the hypervisor host and physical network connectivity. All the software gets sent over the WAN to the virtualization host.

But there's also a chicken-and-egg problem. We need a reliable way to establish communications with the platform that provides network connectivity and runs the VMs. How does that communication continue to function should a problem with the VMs or their virtual interconnections arise?

Cisco's approach is to use a Cisco 4000 Series Integrated Services Router (ISR) with an internal Unified Computing System (UCS) E-Series blade server as the hardware platform for the VM hypervisor. The 4000 ISR already contains the functionality for remote deployment and remote access, and hosts a variety of interface cards for the desired network connectivity.

Through Enterprise Service Automation, Cisco supports automated provisioning of numerous virtual instances that run on the UCS E server. The virtual instances currently supported include a virtual router (ISRv), virtual firewall (ASAv), virtual WAN acceleration (vWAAS), virtual wireless LAN controller (vWLC), and a wide range of unified communications services like UC Manager and Unity Express. The hypervisor is based on the Kernel-based Virtual Machine (KVM), so any compatible third-party VM instance is supported as well. This might be some incentive for vendors that don't yet support the KVM hypervisor to update their offerings.
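
Because the hypervisor is KVM-based, standard KVM tooling can define and start a guest on the UCS E blade. The sketch below uses the libvirt Python bindings with a deliberately minimal, hypothetical domain definition; the image path, sizing, and bridge name are assumptions, and in practice Enterprise Service Automation would drive this provisioning rather than a hand-written script.

```python
import libvirt  # libvirt Python bindings (libvirt-python package)

# Minimal, hypothetical KVM guest definition for a virtual network function.
# Image path, memory, vCPU count, and bridge name are illustrative assumptions.
DOMAIN_XML = """
<domain type='kvm'>
  <name>branch-vnf-demo</name>
  <memory unit='MiB'>4096</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vnf-demo.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br-service'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local KVM hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the guest persistently
dom.create()                           # boot the guest
print(f"Started {dom.name()}, running={bool(dom.isActive())}")
conn.close()
```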

Use Cases
I was already aware of a financial firm's use case. This company has hundreds of branch sites and simply doing a security update takes it three months and one million dollars (that's right, $1 million). Use of Enterprise Service Automation, along with maintaining fewer hardware devices, will result in significant savings. Additional savings will come from the use of smaller hardware cabinets, reduced power and cooling, and being able to change the functionality without sending someone to each branch.

Cisco's Sandgathe provided two other interesting use cases. The first showcases the provisioning of services to multiple airlines within a single airport's physical infrastructure. It turns out that airports don't have a lot of space for things like wiring closets. So reducing the space to that of a single 4000 ISR is a big advantage. On top of that, the networking staff can configure multiple virtual instances to provide service to each airline in an area. As airlines buy and sell gate rights, the topology must change. The soft nature of the virtual instances within the 4000 ISR makes adapting to tenant changes much easier.

The second use case is for oil drilling rigs, and it's obvious once mentioned. Updating network infrastructure on a drilling rig, especially one located offshore, takes a lot of effort and expense. Making most of the elements virtual greatly simplifies updates and changes to the infrastructure. Adding a deep packet inspection utility to perform quality-of-service classification and marking becomes a much simpler task of adding the necessary VM instance and steering the desired packet flow through it. Once the 4000 ISR is in place, IT can implement new functionality over the network.

The UC Connection
What does this have to do with UC&C? This virtual network approach makes it easy to implement a number of UC&C functions at remote branches. There's no hardware lock-in, the available functions are flexible, and the reduced cost of deployment and maintenance is compelling. This is a very exciting application of NFV that many enterprise organizations can use.