
Bandwidth Management

Knowing the demand then leads to determining whether there is sufficient bandwidth in the network, and then upgrading links as needed to support that demand. Sounds easy, but usually there is a financial constraint in the mix that gets in the way, especially if part of your deployment is video and the bandwidth numbers are large.

This leads us to the issue of bandwidth management. As in all engineering tasks, a tradeoff has to be made. The designer decides to allocate bandwidth for the 90th percentile of usage, or decides that certain offices can't exceed a T1 link, or makes the decision based on some other criteria. So the network is built with sufficient bandwidth for most situations, but not for all.

If this were a data application we were discussing, we would not worry, because we know that in those situations where the demand for bandwidth rises above the supply, TCP will balance out the usage of the clients, slow everyone down a bit, and continue to work well. I call this "graceful degradation." Unfortunately, real-time streams don't work this way. Because real-time streams (voice and video) are carried over UDP instead of TCP, they don't degrade gracefully. Instead, when demand exceeds supply, all streams in the affected traffic class start to drop packets. This means all streams start to lose quality together. Ugh.
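To make that contrast concrete, here is a rough back-of-the-envelope sketch in Python. The link size and per-call rate are illustrative assumptions (roughly a T1 and a G.711 call with packet overhead), not measurements; the point is simply that once fixed-rate UDP streams collectively exceed the link, every stream in the class shares the packet loss.

    LINK_CAPACITY_KBPS = 1536      # e.g. a T1 link (illustrative)
    STREAM_RATE_KBPS = 80          # one G.711 call incl. IP/UDP/RTP overhead, roughly

    for num_calls in (10, 19, 20, 25, 30):
        demand = num_calls * STREAM_RATE_KBPS
        if demand <= LINK_CAPACITY_KBPS:
            loss_pct = 0.0
        else:
            # The excess traffic is dropped, and every stream in the class shares the loss.
            loss_pct = 100.0 * (demand - LINK_CAPACITY_KBPS) / demand
        print(f"{num_calls} calls: demand {demand} kbps, ~{loss_pct:.1f}% loss on every call")

At 19 calls the link still fits; at 20 calls every call, not just the newest one, starts losing packets.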

So we need bandwidth management, which means that we need a way of managing how many users are trying to use the bandwidth at any given time.

This is not a new concept; we have always had it in the PSTN environment. We get feedback when there is insufficient bandwidth for our call; we know it as the "trunk busy" signal. The same concept applies to voice and video over IP.

IP-PBXs have this function built-in. The IP-PBX is programmed to know about the network topology and is then given parameters for how many voice calls can be placed between location A and location B simultaneously. When this limit is reached, the IP-PBX will either give the next caller a busy signal, or use an alternate path (such as the PSTN) for routing that call.
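As an illustration only (not any particular vendor's implementation), the admission logic amounts to a counter per site pair with a configured ceiling and a fallback route. The site names and limit below are hypothetical.

    call_limits = {("A", "B"): 12}     # max simultaneous IP calls between sites A and B
    active_calls = {}                  # current call count per site pair

    def place_call(src, dst, pstn_fallback=True):
        pair = tuple(sorted((src, dst)))
        if active_calls.get(pair, 0) < call_limits.get(pair, 0):
            active_calls[pair] = active_calls.get(pair, 0) + 1
            return "routed over the IP WAN"
        # Limit reached: overflow to the PSTN or give the caller a busy signal.
        return "routed via the PSTN" if pstn_fallback else "busy signal"

    print(place_call("A", "B"))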

A similar function exists for video conferencing calls, implemented in the gatekeeper. As with the IP-PBX, the gatekeeper is given high-level knowledge of the network topology and is then programmed to allow only a certain amount of bandwidth on those links. Note that in this case the gatekeeper has to track bandwidth, because video calls can be placed at many different bandwidth levels, whereas VoIP calls are all at the same bandwidth, determined by the codec selected for the enterprise deployment. So the video gatekeeper adds up bandwidth as each new call is placed. When the bandwidth limit is reached, the next call is declined.
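Conceptually, the gatekeeper's bookkeeping looks something like the sketch below. The link budget and call rates are made-up numbers, chosen only to show that the limit is expressed in bandwidth rather than call count.

    LINK_BUDGET_KBPS = 4000     # bandwidth the administrator allows for video on this link
    used_kbps = 0

    def admit_video_call(requested_kbps):
        global used_kbps
        if used_kbps + requested_kbps <= LINK_BUDGET_KBPS:
            used_kbps += requested_kbps
            return True          # call admitted, bandwidth reserved
        return False             # ceiling reached; this call is declined

    for rate in (768, 1920, 768, 1152):       # video calls at different rates (kbps)
        print(rate, "kbps:", "admitted" if admit_video_call(rate) else "declined")

The first three calls fit within the budget; the fourth would push the total past it, so it is the one that gets declined.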

You may already see some problems arising with this simple answer, and I'll tackle some of those in my next post. The answers are not all in yet on bandwidth management, especially as we move into the rapidly increasing demand driven by high-definition (HD) video conferencing.