Want a Five-Nines Network? (Part 3)
Network monitoring is essential for detecting failures in a resilient network and for providing insight into how well the network is running and where it could be improved.
This is the third post in a series about steps that you can take to have a five-nines network, that is, a network with 99.999% availability. Five-nines is generally considered to be the goal of converged networks; it is the availability that the historical voice network commonly delivered.
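To put the number in perspective, here is the quick arithmetic behind the downtime budget at each availability level:

```python
# Downtime budget per year for a given availability target.
for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    minutes_per_year = (1 - availability) * 365 * 24 * 60
    print(f"{nines} nines: {minutes_per_year:.1f} minutes of downtime per year")
```

Five-nines allows roughly 5.3 minutes of downtime per year, which is why it cannot be achieved by reacting to outages after users report them.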
This blog post describes how to use network and configuration management to increase network availability. Network management is one of my specialties and I've created a Network Management Architecture, which is described at http://www.netcraftsmen.net/resources/blogs/nms-architecture-fcaps-or-itil.html.
How do you know when one component of a resilient network fails? The network will continue to run, perhaps in degraded mode. Network management systems must monitor all parts of the network and alert you when some part has failed, so that you can fix it before another component fails and causes an outage.
Having spent time working in financial networks, which have similar availability requirements, I've seen quite a number of failures in which the analysis showed that both parts of a redundant configuration had failed, often weeks or months apart. The first failure went unnoticed because there was no outage; only when the second failure occurred were both failures discovered.
How do you prevent such failures? Network Management! You have to monitor the network to identify failures. The system should generate alerts when a key device or interface fails. You can also set thresholds that create alerts when the utilization of an interface changes substantially, either to near zero or to very high levels, as in the sketch below. Big changes may mean that the routing or spanning tree protocols shifted paths in response to a change in the network. If you are aware of the change, then the alert is validation that the network management system is working correctly. If you're not aware of a change that would explain the alert, then there is something to investigate.
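As an illustration, here is a minimal sketch of that threshold check: compute interface utilization from two successive octet-counter polls (for example, SNMP ifHCInOctets) and alert when utilization drops to near zero or climbs very high. The counter values, poll interval, and thresholds are illustrative, and the sketch ignores counter wrap:

```python
def utilization(octets_prev, octets_now, interval_s, speed_bps):
    """Fraction of link capacity used between two counter samples."""
    bits = (octets_now - octets_prev) * 8  # octets -> bits
    return bits / (interval_s * speed_bps)

def check_interface(name, octets_prev, octets_now, interval_s, speed_bps,
                    low=0.01, high=0.90):
    util = utilization(octets_prev, octets_now, interval_s, speed_bps)
    if util < low:
        print(f"ALERT {name}: utilization near zero ({util:.2%}) - path change?")
    elif util > high:
        print(f"ALERT {name}: utilization very high ({util:.2%}) - congestion?")

# Example: a gigabit link polled 300 seconds apart, now carrying almost nothing
check_interface("GigabitEthernet0/1", 10_000_000, 12_000_000, 300, 1_000_000_000)
```

The near-zero alarm matters as much as the high-utilization alarm: a link that suddenly goes quiet often means traffic has failed over to the redundant path.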
Another approach is active monitoring with synthetic tests. I sometimes call these tests "application level pings," because they run at the application layer. For example, if sending an email takes longer than usual or fails to complete, then there's either a network problem or an email server problem. Web page retrieval tests perform the same type of monitoring; a sketch of one follows.
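A minimal web page retrieval test might look like the following; the URL and the slowness threshold are placeholders:

```python
import time
import urllib.request

def web_check(url, slow_after_s=2.0):
    """Time a page retrieval; flag it if it fails or is slower than baseline."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
    except OSError as exc:  # covers URLError and timeouts
        return f"FAIL {url}: {exc}"
    elapsed = time.monotonic() - start
    if elapsed > slow_after_s:
        return f"SLOW {url}: {elapsed:.2f}s"
    return f"OK {url}: {elapsed:.2f}s"

print(web_check("http://intranet.example.com/"))
```

Run a check like this every few minutes from each major site and trend the results; a gradual slowdown is often visible well before users complain.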
In converged networks, there are two important monitoring steps to take. The first is to monitor the endpoints for connection quality. What are the typical statistics for delay, jitter, and loss? Are calls terminating abnormally? The statistics from real calls are a great way to keep an eye on how the network is performing and to highlight trouble areas. Increasing loss and jitter are early indications of congestion somewhere in the path. The path may have changed because of a failure on the original path, oversubscribing the secondary path, or growing traffic may have oversubscribed the primary path itself.
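For reference, the jitter figure that VoIP endpoints report in RTCP is typically the running interarrival-jitter estimate defined in RFC 3550. Here is a minimal sketch of that estimator; the timestamps are illustrative floats in seconds, whereas a real endpoint works from RTP timestamps and packet arrival times:

```python
def rtcp_jitter(send_times, recv_times):
    """Running interarrival jitter per RFC 3550, section 6.4.1."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        # D = difference in transit time between consecutive packets
        transit_prev = recv_times[i - 1] - send_times[i - 1]
        transit_curr = recv_times[i] - send_times[i]
        d = abs(transit_curr - transit_prev)
        # Exponential smoothing with gain 1/16, as the RFC specifies
        jitter += (d - jitter) / 16.0
    return jitter

# Example: packets sent every 20 ms; the third one arrives 5 ms late
send = [0.00, 0.02, 0.04, 0.06]
recv = [0.10, 0.12, 0.145, 0.16]
print(f"jitter estimate: {rtcp_jitter(send, recv) * 1000:.2f} ms")
```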
The second step for monitoring converged networks is to generate synthetic voice and video traffic. I refer to this as active testing. It is similar to the "application level pings" that I described above. There are at least two methods for generating voice/video synthetic traffic. One is to add probes to key points in the network, such as at each major site, and run tests between the probes. Another is to create synthetic calls to the endpoints, but this requires that the voice and video endpoints support test calls without someone manually initiating them.
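As a sketch of the probe-to-probe approach, here is a hypothetical pair of UDP probes: a reflector at the far site echoes packets, and the near-side probe measures round-trip time and loss. The port number and packet format are my own placeholders, and the sketch ignores out-of-order packets; production deployments typically use purpose-built probes or router features such as Cisco IP SLA:

```python
import socket
import struct
import time

PROBE_PORT = 5005  # assumption: any free UDP port agreed on by both probes

def reflector():
    """Run on the far-end probe: echo each packet back to the sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PROBE_PORT))
    while True:
        data, addr = sock.recvfrom(64)
        sock.sendto(data, addr)

def probe(target, count=100, interval=0.02):
    """Send sequenced, timestamped packets; return RTT samples and loss rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts, lost = [], 0
    for seq in range(count):
        sock.sendto(struct.pack("!Id", seq, time.time()), (target, PROBE_PORT))
        try:
            data, _ = sock.recvfrom(64)
            _, sent = struct.unpack("!Id", data)
            rtts.append(time.time() - sent)
        except socket.timeout:
            lost += 1
        time.sleep(interval)
    return rtts, lost / count
```

Running a stream of small, closely spaced packets like this approximates a voice call's traffic profile, so its loss and delay figures track what a real call would experience.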
When a problem is identified, enter it into a trouble ticket system so that failures can be tracked. You can then analyze the tickets to determine which types of failures are most common and target those first.
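The frequency analysis itself can be as simple as counting ticket categories. A toy example, with hypothetical ticket data:

```python
from collections import Counter

# Hypothetical ticket export: one (category, device) pair per closed ticket.
tickets = [
    ("duplex-mismatch", "sw-access-12"),
    ("link-flap", "rtr-wan-2"),
    ("duplex-mismatch", "sw-access-7"),
    ("power-supply", "core-1"),
    ("link-flap", "rtr-wan-2"),
]

# Rank failure types by frequency to decide where to focus remediation.
by_category = Counter(category for category, _ in tickets)
for category, count in by_category.most_common():
    print(f"{category}: {count}")
```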
Finally, spend time identifying and fixing common, well-known problems. Duplex mismatch is a great example. Many people think that duplex mismatch isn't a big problem and can be left alone. As long as the link is very lightly loaded, they are correct, but a high-volume link with a duplex mismatch will have very poor throughput. Other examples are flapping interfaces, unstable routing and spanning tree protocol instances, high-utilization links (more than 50% average utilization or 70% 95th-percentile utilization), and interfaces reporting errors or discards. A quick screening check for the utilization rule appears below.
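Here is a sketch of that screening rule: flag links whose average utilization exceeds 50% or whose 95th-percentile utilization exceeds 70%. The sample data is hypothetical; in practice the samples would be periodic utilization polls over a reporting period:

```python
def link_needs_attention(samples, avg_limit=0.50, p95_limit=0.70):
    """Flag a link whose average or 95th-percentile utilization is too high."""
    avg = sum(samples) / len(samples)
    ordered = sorted(samples)
    # 95th percentile: the value below which 95% of the samples fall
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    return avg > avg_limit or p95 > p95_limit

# Example: a link that is mostly quiet but busy during peak periods
samples = [0.2] * 90 + [0.8] * 10
print(link_needs_attention(samples))  # True: p95 is 0.8, above the 0.70 limit
```

The example shows why the percentile test matters: this link averages only 26% utilization, yet its busy periods are congested enough to hurt voice and video.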
Taking care of all the small problems makes the network more stable and efficient. You can then focus on bigger problems and you know that two small problems aren't interacting to produce a larger symptom.
Next page: Configuration management