OpenFlow: Does It Matter?
Not everything that's "news" is relevant, as we all know, and when something promises to change (yes, again) the way we move information through networks, it's hard not to look back on past adventures like "IP switching" and sigh. But the latest development in network traffic handling, OpenFlow, might actually have promise, and even support.
In traditional Ethernet switches and routers, packet forwarding is handled via tables built up through address discovery. The process has worked for decades, and it has proved scalable and resilient, but it is not particularly secure, nor is it well suited to providing QoS. What OpenFlow does is make connectivity explicit rather than adaptive and permissive. A switch/router still has a forwarding table, but entries are installed in the table by a controller software application rather than being created by discovery. That means nothing moves unless "mother" says "Yes, you may!" This facilitates traffic management to ensure QoS, and obviously it's as secure as your controller's connection policies demand.
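The controller-installed table can be sketched in a few lines. This is a toy model only, with hypothetical names (`FlowSwitch`, `install_rule`); real OpenFlow matches on many header fields and speaks a binary wire protocol between controller and switch.

```python
class FlowSwitch:
    """Toy model of an OpenFlow-style switch: the forwarding table is
    populated by explicit controller decisions, not address learning."""

    def __init__(self):
        self.flow_table = {}  # (src, dst) -> output port

    def install_rule(self, src, dst, out_port):
        # Called by the controller application: nothing forwards
        # until "mother" says "Yes, you may!"
        self.flow_table[(src, dst)] = out_port

    def forward(self, src, dst):
        # Unlike a learning switch, an unknown flow is not flooded;
        # with no matching rule there is simply no path (a real switch
        # would drop the packet or punt it to the controller).
        return self.flow_table.get((src, dst))


switch = FlowSwitch()
switch.install_rule("10.0.0.1", "10.0.0.2", out_port=3)

assert switch.forward("10.0.0.1", "10.0.0.2") == 3    # authorized flow
assert switch.forward("10.0.0.9", "10.0.0.2") is None  # no rule, nothing moves
```

The key contrast with a conventional learning switch is the default: permissive flooding versus explicit permission.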
This seems like a good idea, but it would have to be productized to be of much value. Fortunately, interest in the concept from the government, scientific, and university community has induced nearly all switch/router vendors to support the architecture. There are also some startups (one of them, Big Switch, made an OpenFlow announcement recently), and networking giant Cisco is rumored to be incubating a startup in the space, to be re-absorbed by Cisco if OpenFlow is a hit.
But how practical is this idea, even with products? It depends.
Don't expect to see an "OpenFlow Internet." Explicitly defining flow rules on a per-switch basis simply isn't going to scale to Internet levels, and I don't think OpenFlow has much chance of beating out MPLS for Internet traffic engineering either. Where OpenFlow could be almost a slam dunk is in the data center, and where it could be literally huge is in the cloud.
Making Ethernet work in ever-expanding data centers has required new standards, and there's also growing interest in "flattening" data center architectures. Large data centers currently require a hierarchical switching architecture to serve the plethora of devices they contain, which means the length of a connection path varies depending on how deep into the hierarchy traffic must climb to get from input port to output port. That in turn generates performance differences that are hard to predict, and every hop is another place where a packet can be dropped or delayed. Reducing these layers of switching, and the variation in hop counts between servers and storage, should make QoS more predictable. OpenFlow helps here by making every flow orderly not only in its primary route, but also in any number of failure modes.
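The hop-count variation can be illustrated with a small sketch. The coordinates and hop counts below are hypothetical (a simplified three-tier tree: top-of-rack, aggregation, core), just to show why path length depends on how far apart two servers sit in the hierarchy, while a flat fabric makes every pair equidistant.

```python
def tree_hops(a, b):
    """Switches traversed between servers a and b in a hypothetical
    three-tier tree; coordinates are (pod, rack, host)."""
    if a == b:
        return 0
    if a[:2] == b[:2]:
        return 1   # same top-of-rack switch
    if a[0] == b[0]:
        return 3   # up through the pod's aggregation switch and back
    return 5       # all the way through the core

def flat_hops(a, b):
    """An idealized flat fabric: every distinct server pair is equidistant."""
    return 0 if a == b else 2

# Path length in the tree varies with placement...
assert tree_hops((0, 0, 1), (0, 0, 2)) == 1
assert tree_hops((0, 0, 1), (0, 1, 2)) == 3
assert tree_hops((0, 0, 1), (1, 0, 0)) == 5
# ...while the flat fabric is uniform, so QoS is easier to predict.
assert flat_hops((0, 0, 1), (1, 0, 0)) == 2
```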
In addition, OpenFlow could make flows secure, because they'd be permitted only where they're authorized. You could extend that order and security between data centers, which is what makes the notion so interesting as the basis for a private cloud. Perhaps most interesting of all, you could probably do this with your current switches if your vendor (like most, as I've said) supports OpenFlow. All you need is an OpenFlow controller, a software application. There are open-source controllers available, as well as commercial versions from some players (like Big Switch).
All this beauty and potential doesn't hide the fact that there are still some issues. One obvious area where more work is needed is the software and policies that drive the controller. OpenFlow is sometimes characterized as "software-defined networking," which is taken to mean "application-defined," but the fact is that most applications have no idea whatsoever what route traffic should take, and some have no idea which user connections should be supported until the user signs on, at which point the application is already connected. There's a layer of abstraction needed here.
The cloud might be the place to find it. Cloud computing includes an implicit function we could call "provisioning," which assigns an application to resources and then ensures that the application's components and the application's users can address each other. A few projects in the cloud world today are looking to make that implicit function an explicit standard, defining a "container" for an application that can be linked to its connectivity requirements as easily as to its server and storage requirements. Those connectivity requirements could then be "pushed down" to an OpenFlow controller, and from there to the switches.
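A minimal sketch of that "push down" idea, under stated assumptions: the container format, field names, and addresses below are all hypothetical, since no such standard is defined in the article. The point is only that declared connectivity requirements are mechanical to compile into the explicit flow rules a controller would install.

```python
# Hypothetical application "container": components plus the flows
# the application is authorized to use.
app_container = {
    "name": "billing",
    "components": {"web": "10.1.0.10", "db": "10.1.0.20"},
    "allowed_flows": [("web", "db", 5432)],  # only web may reach the database
}

def compile_rules(container):
    """Translate a container's declared connectivity into explicit
    flow rules; anything not listed is implicitly forbidden."""
    comps = container["components"]
    return [
        {"src": comps[src], "dst": comps[dst], "dport": port, "action": "forward"}
        for src, dst, port in container["allowed_flows"]
    ]

rules = compile_rules(app_container)
assert rules == [
    {"src": "10.1.0.10", "dst": "10.1.0.20", "dport": 5432, "action": "forward"}
]
```

A controller would hand rules like these to each switch on the path, supplying the abstraction layer the applications themselves can't.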
My view is that the value of OpenFlow is completely linked to the cloud, because the cloud is our best abstraction for a future services-driven network. You can visualize content delivery as a service of the cloud; a content delivery network, or CDN, from several of the key service providers can in fact be cloud-hosted. You can obviously visualize UC/UCC and cloud computing as cloud applications too. Because the cloud can be anything, OpenFlow's support for the cloud means it could be a part of everything. But it probably won't be "all" of much of anything, for two reasons: simple scalability and half-hearted vendor support.
OpenFlow, if successful in the cloud, will create a new model of infrastructure where a vast service cloud exists just inside the network's edge. We connect to it for most of the valuable and useful stuff, from content to applications. To avoid the obvious problem of scaling a network that requires an explicit rule set for every flow, we'd use traditional Ethernet and IP (the Internet) to access our OpenFlow cloud. For the users of the network, there'd be no visible difference.
From the vendor perspective, the question is whether today's blowing of insincere kisses at OpenFlow will evolve into real product support. OpenFlow hardware could and should be different, and cheaper. It's great to be compatible with current switches/routers to get OpenFlow off the ground, but to keep it aloft there will have to be real benefits in cost as well as in features. The fate of Cisco's rumored startup may be key here; if Cisco jumps, or even seems to jump, on OpenFlow, it will be hard for competitors not to do the same, and that may get the ball rolling so fast that nothing will stop it.