No Jitter is part of the Informa Tech Division of Informa PLC


Video and Voice: Is Your Network Ready?

As a consultant at Chesapeake Netcraftsmen, I get to see a lot of interesting network problems and get access to some interesting tools that help us diagnose them. We are often called in to perform network assessments, which must be done quickly and accurately. Manual methods do not work well here; automated tools let us rapidly collect important information about the configuration and operation of the network. We're then left with the task of analyzing the collected data and reporting our findings to the customer.

Voice and video assessments are particularly interesting because they are often performed prior to deploying a voice or video system. If an enterprise has skipped the pre-deployment assessment, however, the job becomes even more critical when the assessment finally is performed post-deployment, because the voice or video system is already in place and is not performing acceptably.

It is important to identify single-instance problems as well as systemic problems. A single-instance problem might be a congested interface, while a systemic problem might be caused by an incorrect QoS configuration that is applied across the network.

Our data network assessments often uncover problems similar to those experienced in voice and video deployments. Two assignments that I've done in the past year resulted in the discovery of significant data network problems caused by excessive buffering in routers. I wrote about one of them in a series of blog entries that are referenced in my post, Defending the Network from Applications.

When producing reports for customers, it is important to make them easily understood, and the key factor in achieving this is your choice of tool. There are multiple tools on the market; we use PathView from Appneta.

How It Works
PathView uses a set of small probes to generate network test packets. I like to call this approach Active Path Testing because test packets are injected into the network and the performance of these packets is monitored to determine various network characteristics. (See my blog on Network Management Architecture for the full suite of network management functions.) Using a sequence of test packets allows the tool to measure and report on the path bandwidth, path utilization, latency, jitter, and packet loss. All these factors are critical to voice and video, as well as many other applications. The figure below shows an example output during some tests that I ran for a customer.

As seen in the blue-tinted section of the graphic, the path capacity matches the theoretical maximum for a 10-Mbps path. The utilization of the path is near zero except for the time of the tests, during which I generated 9.1 Mbps of traffic. You can easily see the correlation between the utilization and the changes in latency and packet loss during the tests, as shown in the lower section of the graphic. When you're working within the tool's interface, placing the cursor on one of the graphs shows the corresponding data points on the other graphs.

In this screenshot, I'm focused on the peak data loss point to see when it occurred (around the 22:03 mark on the horizontal axis of all the graphs). The latency at this point is 240 ms, which is significantly above the 20-ms baseline latency. Jitter is also high at 90 ms and the measured path utilization is 7.6 Mbps. This is near the end of the traffic test, and all the metrics quickly ramp back down to their normal levels when the test traffic stops. The red sections of the Jitter and Latency charts show the alarm thresholds that were set prior to running the tests.
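The metrics above can be derived from a simple sequence of timestamped test packets. The sketch below is a hypothetical illustration of that idea, not PathView's actual algorithm: one-way latency comes from send/receive timestamps, loss from missing sequence numbers, and jitter from the variation between consecutive delays (similar in spirit to the RFC 3550 interarrival-jitter estimate).

```python
# Illustrative sketch: deriving latency, jitter, and loss from a
# sequence of test packets. All names and values are made up.

def path_metrics(sent, received):
    """sent: {seq: send_time}, received: {seq: recv_time}, in seconds."""
    # One-way delay for every packet that actually arrived
    latencies = [received[s] - sent[s] for s in sorted(sent) if s in received]
    loss = 1.0 - len(latencies) / len(sent)
    # Jitter as the mean absolute change between consecutive delays
    diffs = [abs(b - a) for a, b in zip(latencies, latencies[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    latency = sum(latencies) / len(latencies)
    return latency, jitter, loss

sent = {1: 0.00, 2: 0.02, 3: 0.04, 4: 0.06}
recv = {1: 0.02, 2: 0.05, 3: 0.06}          # packet 4 was lost
lat, jit, loss = path_metrics(sent, recv)
print(round(lat, 3), round(jit, 3), round(loss, 2))  # → 0.023 0.01 0.25
```

A real tool refines this with many packet sizes and spacings to also estimate path capacity and utilization, but the core bookkeeping is this simple.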




Two types of test packets can be used. The first is ICMP, which can be sent to any destination that will reply to ICMP. The second packet type is UDP, sent to another PathView appliance or PathView software running on a workstation.

The UDP packets can be formatted as voice or video and tagged with DSCP marking to get the network to handle them like voice or video. The network can then be tested as if it were handling voice and/or video. The subsequent reports show the measured network characteristics during the test, making it easy to spot parts of the network where potential problems lurk.
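DSCP marking of this kind is set in the IP header's TOS/traffic-class byte. As a minimal sketch (the destination, port, and payload below are made-up values, not any real PathView configuration), a probe emulating voice traffic might mark its UDP test packets with DSCP EF like this:

```python
import socket

# Hypothetical example of a DSCP-marked UDP test packet emulating voice.
DSCP_EF = 46                 # Expedited Forwarding, commonly used for voice
TOS = DSCP_EF << 2           # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)

payload = bytes(160)         # 160 bytes ≈ one 20-ms G.711 voice frame
sock.sendto(payload, ("127.0.0.1", 5004))  # loopback stand-in for a far-end probe
sock.close()
print(TOS)                   # → 184
```

Because the packets carry the same marking as real voice or video, they traverse the same QoS queues, so the measurements reflect how actual media traffic would be treated.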

One of the things that I like best about this testing is that it uses very little bandwidth. Continuous path analysis only takes about 2 kbps, and an in-depth path analysis might use up to 200 kbps. Both figures are pretty small, relative to today's network speeds.
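A quick back-of-the-envelope check (using a 10-Mbps link as in the earlier example) shows just how small those figures are:

```python
# Overhead of the test traffic as a percentage of link capacity.
def overhead_pct(test_kbps, link_mbps):
    return 100.0 * (test_kbps * 1000) / (link_mbps * 1_000_000)

print(overhead_pct(2, 10))    # continuous analysis on a 10-Mbps link → 0.02
print(overhead_pct(200, 10))  # in-depth analysis on the same link → 2.0
```

Even the in-depth test consumes only 2% of a modest 10-Mbps link, and proportionally less on faster circuits.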

The other advantage of the packet-stream approach to path testing is that many tests can be run from a single server, and lower-powered servers suffice. This certainly beats using ttcp, iperf/jperf, or netcat for path testing, all of which generate intrusive network traffic.

The probes send their test results to an analysis server (in our case, typically the Appneta PathView Cloud service on the Internet, though an enterprise appliance may also be used). The analysis server summarizes the results and produces the nice graphical views of the tests. Alerts can be defined to fire when any metric exceeds a configurable threshold.
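The alerting logic is conceptually straightforward. This sketch uses hypothetical threshold values (they are illustrative, not PathView defaults) and the metric readings from the test shown earlier:

```python
# Minimal sketch of threshold-based alerting on collected path metrics.
# Threshold values are hypothetical examples, not tool defaults.
THRESHOLDS = {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1.0}

def check_alerts(sample):
    """Return the metrics in `sample` that exceed their threshold."""
    return [m for m, limit in THRESHOLDS.items() if sample.get(m, 0) > limit]

sample = {"latency_ms": 240, "jitter_ms": 90, "loss_pct": 0.4}
print(check_alerts(sample))   # → ['latency_ms', 'jitter_ms']
```

Here the 240-ms latency and 90-ms jitter readings from the earlier test both trip their thresholds, while the loss figure stays within bounds.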

A Typical Deployment
The deployment model depends on what type of network assessment is being performed. For voice and video, I like to deploy multiple probes, placing one in each key location, such as in a call center, in a data center where a media control unit lives, at the connection to the external telco provider, and at other key business locations. It might take 10 probes to create enough test points to get a good picture of how well the network is supporting voice and video.

The probes would be configured to generate UDP packets to emulate voice and video between themselves. Additional tests using ICMP may be configured to provide a complete picture of how the network handles data. A data assessment deployment may use fewer probes, located in data centers, using ICMP test packets to various client systems.
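The number of test paths grows quickly with the probe count, which is worth keeping in mind when sizing a deployment. A sketch, with hypothetical site names, of a full-mesh UDP test plan between probes:

```python
import itertools

# Full-mesh test plan: every probe tests the path to every other probe,
# in both directions. Site names are hypothetical.
sites = ["callcenter", "datacenter", "telco-edge", "branch"]
tests = [(src, dst) for src, dst in itertools.permutations(sites, 2)]
print(len(tests))   # 4 sites → 12 directed paths
```

With 10 probes that grows to 90 directed paths, so in practice we test the paths that matter to the business rather than every possible pair.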

The tests are configured and left to run to collect data. A preliminary review can be performed after one day, but it is best to let the tests run for a week to capture a full weekly cycle. The analysis is then reviewed to identify any problems. Paths with high utilization, data loss, or jitter are investigated to determine the cause: we look for indications of link errors, congestion at an egress interface, or an improper QoS implementation. With multiple tests, we can also determine whether the problems we see are system-wide or specific to one part of the network. A deep understanding of networking is needed to correctly analyze the data.
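The single-instance vs. systemic distinction mentioned earlier can be sketched as a simple rule over the per-path results: if most paths show the same symptom, suspect a network-wide cause such as a misapplied QoS policy; if only one does, suspect a local issue such as a congested interface. The path names and threshold below are hypothetical.

```python
# Illustrative classification of problems as single-instance vs. systemic,
# based on what fraction of measured paths show the symptom.
def classify(paths, metric, limit, systemic_fraction=0.5):
    bad = [name for name, stats in paths.items() if stats[metric] > limit]
    if not bad:
        return "ok", bad
    kind = ("systemic" if len(bad) / len(paths) >= systemic_fraction
            else "single-instance")
    return kind, bad

paths = {
    "callcenter->dc": {"loss_pct": 0.1},
    "branch1->dc":    {"loss_pct": 2.4},   # only this path shows loss
    "branch2->dc":    {"loss_pct": 0.2},
}
print(classify(paths, "loss_pct", 1.0))   # → ('single-instance', ['branch1->dc'])
```

In a real assessment the judgment is of course more nuanced, but having all paths measured the same way is what makes the comparison possible at all.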

It Works!
In an engagement for one customer, we found something that might seem curious at first glance: The path occupancy figures were more accurate at higher utilization levels than at low utilization levels. This wasn't so surprising once we thought about it, though. It is difficult to use packet streams to accurately distinguish between lightly occupied paths (say, one that is 5% occupied versus 10%). The fact that the measurements are accurate at high utilization levels means real problems are easy to identify, while the imprecision at low utilization levels doesn't create false alarms.

And identifying real or potential problems is our goal: showing customers what is happening in a way that everyone understands.