No Jitter is part of the Informa Tech Division of Informa PLC


The End of "Throwing Bandwidth"?

I want to take off on a point that John Bartlett made in his most recent blog post on Scalable Video Coding (SVC):

...if video were to grow to be half or more of the bandwidth carried on the enterprise network, can QoS really work? QoS is a priority mechanism. When everyone is standing in the first-class line, no one is getting priority.

So I think there will be a big advantage to running desktop video at a lower class of service or even at best effort. The high priority classes will be saved for voice and for room-based video (telepresence or otherwise), and maybe for the CEO's executive desktop system. But for the rest of us, we will have to get by with a lossy network.

On one level, this is the age-old "bandwidth-vs.-QoS" debate, i.e., "throwing bandwidth at the problem." But actually, it isn't.

When we used to talk about "throwing bandwidth at the problem," we really were talking about a world where some kind of 80/20 rule was pretty much assumed: 80% of the bandwidth (or pick your big number) would be consumed by non-QoS-based apps. The paradigm was web pages loading static content or, at worst, streaming video, which is a near-real-time app: annoying if it has too much delay, but tolerable if there's just a bit of a lag.

But as John points out, you can't throw bandwidth at the problem when the problem is that everything wants top priority for using 80% of the bandwidth. That's not sustainable.

So what John's saying in this post isn't just that you'll want to use the Internet to control your costs if your users start adopting video at their desktops. It's that your mission-critical video won't work the way it's supposed to in that scenario.

I suppose the other option would be to create a priority class for non-essential video (though how you determine and mark that, I'm not sure). But at that point, if you gave that non-essential traffic a low priority on the internal network, would you really get better performance than you could get on the Internet?
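For what "marking that" could look like in practice: one place a mark can be applied is in the application itself, by setting the DSCP field in the IP header through the standard socket API. The sketch below is purely illustrative, and assumes application-side marking (many shops instead mark or re-mark at the network edge, and networks are free to ignore or rewrite these bits). The mapping of traffic types to code points here is my own example, not a standard.

```python
import socket

# DSCP code points (defined in RFC 2474); the traffic-class
# assignments below are illustrative, not a standard mapping.
DSCP_EF = 46    # Expedited Forwarding: voice
DSCP_AF41 = 34  # Assured Forwarding 41: room-based/telepresence video
DSCP_BE = 0     # Best effort: everyday desktop video

def mark_socket(sock: socket.socket, dscp: int) -> None:
    """Set the DSCP code point for traffic sent on an IPv4 socket.

    The DSCP occupies the top six bits of the IP TOS byte, so the
    value is shifted left by two before being written via IP_TOS.
    """
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

# Example: a desktop video sender deliberately staying at best effort,
# leaving the high-priority classes for voice and room systems.
desktop_video = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(desktop_video, DSCP_BE)
```

Of course, this only restates the problem John raises: if every application marks itself EF, the first-class line is full again, which is why trusting endpoint marks is usually a policy decision made at the edge.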

Someone tell me: