The service provider community is pretty excited about telepresence. Not because it provides a great immersive remote conferencing experience. Not because it reduces travel and increases productivity. Not because it is hot, or green, or in the news. They like it because it generates a terrific amount of bandwidth that has to be carried at the highest levels of quality. That means increased revenue. But do they really know how to carry this traffic so that high-quality video is delivered?

I have talked to a couple of service providers recently who don't seem to have a handle on the technical requirements. Lots of bandwidth? OK, that one is easy enough to understand. Telepresence needs about 5 Mbps per screen, so a 3-screen system needs 15 Mbps and a 4-screen system needs 20 Mbps. And if you push up to 1080p resolution, the bandwidth is even higher.
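The per-screen arithmetic above can be sketched in a few lines. This is a rough sizing helper, assuming the ~5 Mbps-per-screen figure from the text; actual codec rates vary by vendor, resolution, and frame rate, and the function name is mine, not any vendor's tool.

```python
PER_SCREEN_MBPS = 5  # approximate per-screen rate; 1080p runs higher

def required_bandwidth_mbps(screens, audio_and_aux_mbps=0):
    """Return the video bandwidth a multi-screen telepresence system needs."""
    return screens * PER_SCREEN_MBPS + audio_and_aux_mbps

print(required_bandwidth_mbps(3))  # 3-screen system -> 15
print(required_bandwidth_mbps(4))  # 4-screen system -> 20
```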
The real issues are in guaranteeing low packet loss and low jitter. And I mean low. Most of the vendors require about 0.1% or less packet loss. Cisco asks for less. Jitter needs to be below about 40 ms. Cisco asks for less. These are tough specifications to meet when we are talking about cross-country or international connections. But it has to be done to make this stuff work well. When an IT engineer asks, "Do we really need to meet those specs?" I point out that a telepresence system will graphically display network packet loss on 60-inch plasma screens to their top executives. Yes, it really needs to be done right.
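Those budgets boil down to a simple pass/fail check on a measured path. The sketch below uses the ~0.1% loss and ~40 ms jitter figures from the text as thresholds; the constants and function name are illustrative, and Cisco's actual targets are tighter.

```python
MAX_LOSS_PCT = 0.1   # packet loss budget, percent
MAX_JITTER_MS = 40   # jitter budget, milliseconds

def path_meets_spec(loss_pct, jitter_ms):
    """Flag whether a measured path fits the telepresence loss/jitter budget."""
    return loss_pct <= MAX_LOSS_PCT and jitter_ms <= MAX_JITTER_MS

print(path_meets_spec(0.05, 25))  # True  - within budget
print(path_meets_spec(0.30, 25))  # False - loss too high
```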
Here is the problem I have discovered. Video conferencing traffic should be allocated to its own CoS class, so there is no interference from other applications, data applications in particular. The bursty nature of data applications can cause momentary peak utilization, which causes loss and jitter on the real-time (voice and video) packet streams.
This dedicated CoS class should NOT use Weighted Random Early Discard (WRED). WRED is a fabulous traffic management algorithm that takes advantage of TCP behavior by tossing out a few packets randomly when the traffic streams start to get near the maximum bandwidth allocated for the class. TCP streams that experience slight loss back down their bandwidth a bit, which balances the load across the TCP streams and keeps everybody working well.
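To make the mechanism concrete, here is a toy model of random early discard: below a minimum average queue depth nothing is dropped, above a maximum everything is, and in between the drop probability ramps up linearly. The thresholds and maximum probability are illustrative numbers, not router defaults.

```python
import random

MIN_TH, MAX_TH, MAX_P = 20, 40, 0.10  # queue depths in packets, max drop prob

def wred_drop(avg_queue_depth):
    """Return True if this packet should be randomly discarded."""
    if avg_queue_depth < MIN_TH:
        return False          # queue is shallow: never drop
    if avg_queue_depth >= MAX_TH:
        return True           # queue is full: drop everything
    # linear ramp between the two thresholds
    p = MAX_P * (avg_queue_depth - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```

For TCP this gentle early signal is exactly right; the next paragraph explains why it backfires for UDP media streams.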
But guess what? Voice and video are carried over UDP streams, not TCP streams. UDP streams that experience random discard just deliver poor quality results. If the endpoints have an FEC (forward error correction) algorithm built in, like the Polycom units do, they will increase their packet rate and decrease their video rate, causing a slightly higher demand on the network. Not really the result for which WRED was designed.
For video conferencing, the CoS class should allow the video to use the bandwidth right up to the allocated limit before dropping packets. The bandwidth management problem does not belong to the router; it belongs to the application. The video conferencing gatekeeper should limit the bandwidth used to stay within the available CoS bandwidth limit at all times.
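The gatekeeper-side admission control described above might look like this: new calls are admitted only while the sum of their video rates stays inside the CoS allocation, so the router never has to drop. A minimal sketch; the class and method names are hypothetical, not any vendor's gatekeeper API.

```python
class Gatekeeper:
    """Admit calls only while total video bandwidth fits the CoS class."""

    def __init__(self, cos_limit_mbps):
        self.cos_limit_mbps = cos_limit_mbps
        self.in_use_mbps = 0

    def admit(self, call_mbps):
        """Reject (or renegotiate down) rather than let the network drop packets."""
        if self.in_use_mbps + call_mbps > self.cos_limit_mbps:
            return False
        self.in_use_mbps += call_mbps
        return True

    def release(self, call_mbps):
        """Return a finished call's bandwidth to the pool."""
        self.in_use_mbps -= call_mbps

gk = Gatekeeper(cos_limit_mbps=45)
print(gk.admit(20))  # True  - first 4-screen call fits
print(gk.admit(20))  # True  - second call fits (40 of 45 Mbps used)
print(gk.admit(15))  # False - would exceed the 45 Mbps class
```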
So my message to the carriers is: don't mix your traffic types, and let the video run right to the limit. Yes, once the limit is reached you can drop packets, but not a second before. This approach will deliver consistently high-quality video conferencing and telepresence results.