Making Real Networks From Virtual Things
Talk about virtualization is so pervasive these days that you have to wonder if anything is real anymore. Network architects in particular must be asking this question, because at the root of every network strategy is the reality that you have to sell services and carry traffic.
To justify virtualizing routing, switching and other functions, we not only have to support those two goals but do so in a way that's better than today's real boxes can manage. The questions are "Can that be done?" and "How?" -- and the path to answers may start by recognizing that no single technology shift will do it all.
The basic idea of virtualization is to create the behavior or features of a physical device by taking a software-based abstraction of those features and hosting it on a pool of resources. The principles are the same as those already being adopted in cloud computing -- a virtual server is created by hosting a virtual machine on a real resource pool. The key point here is that the abstraction of network functionality can take two basic forms -- a single virtual element can provide all the features of a real device, or a system of interworking elements can combine to create those features. We can see examples of both these approaches developing today.
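The two forms of abstraction can be made concrete with a small sketch. This is purely illustrative -- the class names and interfaces below are invented for the example, not drawn from any real product -- but it shows how a single monolithic virtual element and a composed system of cooperating elements can present identical behavior to the outside world:

```python
# Hypothetical sketch: two ways to abstract a "router" in software.
# All class and method names are illustrative, not from any real product.

class MonolithicVirtualRouter:
    """One virtual element providing all features of the real device."""
    def __init__(self):
        self.routes = {}  # prefix -> next hop

    def add_route(self, prefix, next_hop):
        self.routes[prefix] = next_hop

    def forward(self, destination):
        return self.routes.get(destination, "drop")


class ForwardingPlane:
    """One element of a composed system: packet handling only."""
    def __init__(self):
        self.table = {}

    def forward(self, destination):
        return self.table.get(destination, "drop")


class ControlPlane:
    """A second element: route decisions, pushed into the forwarder."""
    def __init__(self, forwarder):
        self.forwarder = forwarder

    def add_route(self, prefix, next_hop):
        self.forwarder.table[prefix] = next_hop


# Both compositions present the same external behavior:
mono = MonolithicVirtualRouter()
mono.add_route("10.0.0.0/8", "eth1")

fp = ForwardingPlane()
cp = ControlPlane(fp)
cp.add_route("10.0.0.0/8", "eth1")

assert mono.forward("10.0.0.0/8") == fp.forward("10.0.0.0/8") == "eth1"
```

The point of the second form is that the pieces can be scaled, placed and replaced independently, which matters once functions are hosted on a shared resource pool.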
Virtual switches (when used in the WAN, not the data center) and virtual routers (now offered by most network vendors) are essentially switch/router software designed to run on a server instead of a custom hardware platform. This shift takes advantage of the low cost of x86 servers that results from their wide use, but it sacrifices custom hardware features that can accelerate packet handling or improve availability. As a result, these virtual devices are best used where there isn't a lot of traffic and where multiple users aren't sharing the devices, which would increase the consequences of a failure.
Putting the 'Private' in VPN
Aggregating traffic is a fundamental principle of economical networking -- higher utilization of trunks means lower cost per bit carried. If we make virtual routers and switches user- or service-specific, then we have to do the aggregation somewhere else, meaning lower on the OSI stack. Dedicating optical connections to every user or service is hardly cost-effective, so some electrical grooming is needed. That's where software-defined networking (SDN) could come in.
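The cost-per-bit logic is simple enough to show with arithmetic. The numbers below are invented for illustration only -- the point is just that a trunk's cost is largely fixed, so aggregating traffic onto it drives the cost per bit carried down in proportion to utilization:

```python
# Illustrative arithmetic (all figures invented): a fixed-cost trunk's
# cost per bit falls as aggregation raises its utilization.

def cost_per_gbit(monthly_trunk_cost, capacity_gbps, utilization):
    """Cost per gigabit actually carried over one month."""
    seconds_per_month = 30 * 24 * 3600
    gbits_carried = capacity_gbps * utilization * seconds_per_month
    return monthly_trunk_cost / gbits_carried

# A 100G trunk at $20,000/month: dedicated to one user (20% utilized)
# versus shared through aggregation (80% utilized).
dedicated = cost_per_gbit(20_000, 100, 0.20)
aggregated = cost_per_gbit(20_000, 100, 0.80)

assert aggregated < dedicated  # 4x higher utilization, 1/4 the cost per bit
```

Quadruple the utilization and the cost per bit drops to a quarter -- which is why pushing aggregation down to a grooming layer, rather than dedicating trunks per user or service, is economically decisive.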
SDN can build tunnels efficiently by simply chaining forwarding rules across multiple devices. These can then share optical trunks to increase transport efficiency. If we use low-cost white-box OpenFlow switching, we could build this electrical grooming layer cheaply compared with Ethernet/IP networks. Our virtual switches and routers could then combine with user- and service-specific tunnels to create agile networks.
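The rule-chaining idea can be sketched in a few lines. This is not real OpenFlow -- a real controller would install match/action flow entries over the OpenFlow protocol -- but the data structure is analogous: each switch holds a rule matching a tunnel identifier on an input port and forwarding to an output port, and the chain of rules is the tunnel:

```python
# Minimal sketch (not real OpenFlow) of a tunnel built by chaining
# per-switch forwarding rules across multiple devices.

def build_tunnel(path, tunnel_id, tables):
    """Install one forwarding rule per switch along 'path'.

    path:   list of (switch, in_port, out_port) hops
    tables: dict of switch -> {(tunnel_id, in_port): out_port}
    """
    for switch, in_port, out_port in path:
        tables.setdefault(switch, {})[(tunnel_id, in_port)] = out_port

def traverse(path, tunnel_id, tables):
    """Simulate a packet following the installed rules end to end."""
    hops = []
    for switch, in_port, _ in path:
        out_port = tables[switch][(tunnel_id, in_port)]
        hops.append((switch, out_port))
    return hops

tables = {}
# Tunnel 7 enters s1 on port 1, crosses s2, and exits s3 on port 4.
hops = [("s1", 1, 2), ("s2", 2, 3), ("s3", 3, 4)]
build_tunnel(hops, 7, tables)

assert traverse(hops, 7, tables) == [("s1", 2), ("s2", 3), ("s3", 4)]
```

Because each rule is keyed by tunnel ID, many tunnels can share the same white-box switches and the same optical trunks underneath, which is where the transport efficiency comes from.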
That agility angle is the big benefit to be gained. If we envision our tunnel-SDN network as a kind of universal connection fabric, we could see cloud data centers as a hosting fabric connected to it. We could then use this hosting fabric to place virtual switching/routing features at the places where traffic patterns make them most useful in controlling network cost and performance. If something broke, the tunnel fabric and hosting fabric could combine to reposition connections around the failure.
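Repositioning connections around a failure reduces, at its simplest, to recomputing a path that avoids the failed element. The sketch below uses a plain breadth-first search over an invented topology; a real controller would also weigh capacity, latency and policy, but the agility mechanism is the same:

```python
# Hedged sketch: when a node in the tunnel fabric fails, the controller
# recomputes a path that avoids it. Topology and node names are invented.

from collections import deque

def find_path(topology, src, dst, failed=frozenset()):
    """Shortest hop-count path from src to dst, avoiding failed nodes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr in topology.get(node, []):
            if nbr not in seen and nbr not in failed:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None  # no route survives the failure

topology = {
    "edge-a": ["core-1", "core-2"],
    "core-1": ["edge-a", "edge-b"],
    "core-2": ["edge-a", "core-3"],
    "core-3": ["core-2", "edge-b"],
    "edge-b": ["core-1", "core-3"],
}

assert find_path(topology, "edge-a", "edge-b") == \
    ["edge-a", "core-1", "edge-b"]
# core-1 fails; the tunnel repositions over the longer surviving path:
assert find_path(topology, "edge-a", "edge-b", {"core-1"}) == \
    ["edge-a", "core-2", "core-3", "edge-b"]
```

The hosting fabric plays the matching role for the virtual routers themselves: a failed instance is re-instantiated elsewhere, and the tunnel fabric re-homes its connections to the new location.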
One potentially enormous benefit this could create is a truly "private" form of VPN. If tunnels keep user traffic separated from that of other users and the Internet, and if every VPN has its own dedicated virtual router instances, there's far less chance that one VPN (or the Internet) could in any way interact with another. You could even separate applications within a company, keeping unauthorized users and traffic completely out of the networks of applications that demand strong security. The same approach could create separate networks for content delivery, distinct mobile subnetworks for mobile virtual network operators, and better cloud security by separating cloud traffic from other traffic. If users had a virtual on-ramp hosted on client systems or at the service edge, you could extend this separation of traffic all the way to the user.
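The isolation property described above can be stated in code. In this illustrative sketch (the class and tenant names are hypothetical), each VPN gets its own dedicated route table keyed by tenant, so forwarding state is never shared and cross-tenant delivery is impossible by construction rather than by filtering:

```python
# Hypothetical sketch of per-tenant isolation: one dedicated virtual
# router instance (here, a private route table) per VPN.

class TenantFabric:
    def __init__(self):
        self.routers = {}  # tenant -> that tenant's private route table

    def provision(self, tenant):
        self.routers[tenant] = {}

    def add_route(self, tenant, prefix, next_hop):
        self.routers[tenant][prefix] = next_hop

    def forward(self, tenant, prefix):
        # Lookup happens only in the caller's own instance; another
        # tenant's routes are simply not visible here.
        return self.routers[tenant].get(prefix, "drop")

fabric = TenantFabric()
fabric.provision("vpn-a")
fabric.provision("vpn-b")
fabric.add_route("vpn-a", "10.1.0.0/16", "tunnel-17")

assert fabric.forward("vpn-a", "10.1.0.0/16") == "tunnel-17"
assert fabric.forward("vpn-b", "10.1.0.0/16") == "drop"  # no leakage
```

Contrast this with a conventional shared router, where separation depends on correctly maintained filters and VRF configuration; here there is no shared table in which a misconfiguration could leak routes between tenants.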
The SDN tunnel approach raises a "how low can you go?" question -- how to integrate this with agile optics. The ideal situation would have the electrical grooming and the optical layer beneath it working together to provide a seamless "tunnel layer." The Open Networking Foundation has assumed this means controlling everything with OpenFlow, and has pushed optical extensions to that protocol. That seems to me to be a waste of time; we don't need a single protocol here, just cooperative control. With the two-layer model I've described, the tunnel layer only really has to react to major changes in traffic or to failures.
It's also fair to ask how this impacts SDN at the top, meaning whether SDN's ability to replace switching/routing is important given the increased use of virtual routers/switches. I think SDN's best mission is in grooming, but there may be an advantage to using it to set up the virtual routers and switches. It's just a bit too early to say how much of a benefit that would bring.
Getting enough experience with this model to answer questions like the value of SDN at the IP level could be tricky. There's a symbiosis between SDN and virtual routers, and between optical networks and SDN, that plays out differently than we've been expecting. Only a few vendors have a broad product footprint in this new structure, and without a chance to make money everywhere, vendors may hold back. Whether that holds back the new virtual model, or just the slow-moving vendors, is a question for 2015.