I’ve been thinking about unidentified flying objects. The cool thing is that until one lands in your backyard, you’re free to assign any characteristics you like to them. The bad thing is that once you start doing that, asking how many little green men can fit in a disc, or how fast one rotates, you’re committed by implication to a single vision of the UFO unknown. So it is with virtualization and networks. It’s easy to find things to virtualize, but when you find one, you’ve implicitly picked an architecture, and it might be wrong.
We have three approaches to network virtualization today -- hosted router instances, white boxes, and overlay networks. There is no shortage of products, open and proprietary, in each of these spaces, and perhaps it’s the noise of competition within each strategy that has kept us from looking at the differences among the three approaches. There are good, and new, reasons to start paying attention to those differences now.
Hosted router software and white-box switches share a common premise: routing is good, but routers are expensive. What’s needed, according to advocates of either approach, is an alternative to routers for building traditional IP networks. The key term here is “traditional IP networks,” because the result of a hosted-router or white-box substitution would still look, from the outside, like the same sort of router network you’d build with routers from Cisco or its competitors. It’s a network of largely static boxes.
You can save a decent chunk of change using some sort of hosted/white-box strategy; proponents claim between 40% and 60% reductions in CapEx. The problem is that you don’t save on operations complexity or cost because you’re still operating a router network. In fact, you may actually end up with higher costs because the underlying platform (COTS or white box) could be less reliable and/or have more operating parameters to tweak to keep things running.
Just as both enterprises and service providers are wrestling with virtual routers, cloud users and providers have been wrestling with virtual networks. When public cloud first came along, it was clear that a cloud data center might support tens of thousands of different customers, more than traditional network technologies could keep separate from each other. Nicira, eventually sold to VMware to become NSX, created an overlay protocol to solve the problem. “Overlay” means that the protocol rides on top of another protocol like Ethernet, and provides complete connectivity control. This flavor of software-defined network, or SDN, is becoming dominant in cloud applications.
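To make “overlay” concrete, here’s a minimal sketch in the spirit of VXLAN, one common overlay encapsulation (the specifics here are illustrative, not any vendor’s exact implementation). The point is the arithmetic: a traditional Ethernet VLAN tag has 12 bits of segment ID, while a VXLAN-style header carries a 24-bit segment ID, which is how an overlay keeps millions of tenants separate on top of ordinary infrastructure.

```python
import struct

VLAN_ID_BITS = 12    # traditional 802.1Q VLANs: 4094 usable segments
OVERLAY_VNI_BITS = 24  # VXLAN-style overlay: ~16 million segments

def overlay_header(vni: int) -> bytes:
    """Build an 8-byte VXLAN-style header carrying a 24-bit segment ID (VNI).

    Flags byte 0x08 marks the VNI field as valid; reserved fields are zero.
    """
    if not 0 <= vni < (1 << OVERLAY_VNI_BITS):
        raise ValueError("VNI out of range")
    # flags (1 byte) + reserved (3) + VNI (3) + reserved (1) = 8 bytes
    return struct.pack("!B3s3sB", 0x08, b"\x00\x00\x00",
                       vni.to_bytes(3, "big"), 0)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the overlay header. In practice the result rides inside an
    ordinary UDP/IP packet, so the underlay never sees tenant addressing."""
    return overlay_header(vni) + inner_frame

segments_vlan = (1 << VLAN_ID_BITS) - 2   # 4094
segments_overlay = 1 << OVERLAY_VNI_BITS  # 16,777,216
```

Four thousand segments can’t keep tens of thousands of cloud customers apart; sixteen million can, and that’s the whole trick.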
You can apply the same principle in the wide-area network, which is what happened with software-defined WAN, or SD-WAN. SD-WAN creates an overlay network on top of IP, usually the Internet, to build what looks like a virtual private network (VPN) to users, but is a lot cheaper to get and more widely available than the MPLS VPNs usually sold to enterprises. What many love about SD-WAN is that all the networks it builds are separate -- ships in the night. In some implementations, you can even control who can talk with what, offering an extra layer of security.
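The “who can talk with what” control some SD-WAN implementations offer amounts to a default-deny connectivity policy, which can be sketched in a few lines (the group names and policy table here are hypothetical, purely for illustration):

```python
# Hypothetical allow-list: a flow is permitted only if a policy entry
# explicitly joins its (source group, destination group) pair.
POLICY = {
    ("branch-pos", "datacenter-payments"): True,
    ("branch-guest-wifi", "internet"): True,
}

def permits(src_group: str, dst_group: str) -> bool:
    """Default-deny: overlays stay 'ships in the night' unless a policy
    entry explicitly connects them."""
    return POLICY.get((src_group, dst_group), False)
```

Anything not named simply doesn’t connect, which is why this reads as an extra layer of security rather than an extra layer of overhead.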
Now the same cloud that launched SDN and arguably SD-WAN is raising a broader question, which is whether we need to have router networks at all. If application and enterprise connectivity can be created with a network overlay, and if you can overlay such a network on top of Ethernet or the Internet or pretty much anything else, why not dumb down the “underlay” network and let the overlay run things? That’s what the MEF was talking about with its Third Network, which of course was based on Ethernet. A few optical vendors, like Ciena, have hinted at using optical pipes and electrical tunnels in operator networks, with a customer- and even application-specific overlay for connectivity.
The big benefit of the per-customer or per-application scenario is that the traffic on these virtual networks is limited, so there’s less need for specialized hardware. You could host your overlay network nodes on COTS or white-box switches with lower capacity and cost. You could create a pool of hardware to host on, and use cloud technology to spin up network topologies to match traffic patterns by placing your nodes where they were needed. Operators would provide underlay networks to each site, and the overlay networks would link users and applications into virtual communities.
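“Spin up network topologies to match traffic patterns” can itself be sketched as a scheduling problem: given offered traffic per site and the capacity of one cheap hosted node, decide how many overlay nodes to place where. This is a toy model with made-up numbers, not any operator’s placement algorithm:

```python
import math

def place_overlay_nodes(site_traffic: dict[str, float],
                        node_capacity: float) -> dict[str, int]:
    """For each site with offered load, spin up just enough low-cost hosted
    nodes to carry it -- capacity follows demand instead of being pre-built
    into a static box at every location."""
    return {site: math.ceil(load / node_capacity)
            for site, load in site_traffic.items() if load > 0}

# Illustrative: loads in Gbps against hypothetical 10 Gbps hosted nodes.
plan = place_overlay_nodes({"nyc": 35.0, "chi": 8.0, "idle-site": 0.0}, 10.0)
```

A site with no traffic gets no nodes at all, which is exactly the elasticity a network of fixed routers can’t offer.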
All this seems logical. Operator services could be less costly, using cheaper devices because the connection and security features would be higher up. Cloud services need virtual networks anyway, so enterprises wouldn’t face additional costs for the overlay features. Everything could be made elastic and resilient, even more secure. What more could you ask?
How about comprehension? The problem with this utopian approach to building simpler networks is that we have literally decades of box-centric thinking to overcome. We’ve accepted, sort of, the notion of overlay networks, but SD-WAN vendors tell me they still have to overcome the perception that the extra layer of network functionality is somehow “overhead.”
Accepted, but not sought. If virtual networks are more than virtual routers, we have to start thinking about networking in a different way, a way that doesn’t define fixed nodes or routes, where traffic patterns can be composed like user interfaces and security is explicitly a part of the network service. That’s not going to be an easy transformation for vendors, operators, or users. But then our decades-long quest for the truth about UFOs hasn’t been easy either.