No Jitter is part of the Informa Tech Division of Informa PLC


Network Redundancy or Resilience?

I've been doing a session about network resilience in John Bartlett's QoS workshop at Enterprise Connect for the past several years. The network is the transport for voice and video communications and has become critical infrastructure for business operations. John's session discusses QoS, where and when it is needed (everywhere and all the time), and the factors that come into play when considering the deployment of QoS in the network. One of the factors is network resilience and its interaction with network redundancy.

It is useful to review the definitions of the terms 'resilience' and 'redundancy,' so I did some research.

* re·dun·dan·cy n.
6. Electronics Duplication or repetition of elements in electronic equipment to provide alternative functional channels in case of failure.

* re·sil·ience n.
1. The ability to recover quickly from illness, change, or misfortune; buoyancy.

A redundant system includes multiple channels to provide alternate paths for communications in case of individual failures. Providing one redundant channel can allow the network to continue to function when a single failure occurs, provided that there are no devices or links in common that form a single point of failure.
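The "single point of failure" check described above can be automated. Below is a minimal sketch, using a hypothetical three-node topology, that tests whether removing any one link partitions the network:

```python
def connected(nodes, links):
    """Depth-first connectivity check over an undirected link list."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

def single_points_of_failure(nodes, links):
    """Return every link whose individual loss partitions the network.

    Links are removed by index so parallel (duplicate) links are handled
    correctly -- losing one of a redundant pair is not a failure.
    """
    spofs = []
    for i, link in enumerate(links):
        remaining = links[:i] + links[i + 1:]
        if not connected(nodes, remaining):
            spofs.append(link)
    return spofs

# Hypothetical example: two parallel A-B links (redundant), one B-C link.
nodes = {"A", "B", "C"}
links = [("A", "B"), ("A", "B"), ("B", "C")]
print(single_points_of_failure(nodes, links))  # [('B', 'C')]
```

The two parallel A-B links survive a single failure, but the lone B-C link is reported as a single point of failure.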

Resilience, on the other hand, refers to a system's ability to adapt to failures and to resume normal operations when the failure has been resolved.

A network will typically incorporate redundant components and links in order to construct a resilient network. But just because a network contains redundant components, that doesn't mean that it is resilient, from the standpoint of business operations.

Maximum Use of Parallel Paths
I've occasionally encountered business managers who want to optimize the Return On Investment (ROI) of all parts of the network. When the network is designed with multiple paths, these managers want to make maximum use of all paths. They tend to not think about the case of operation when a failure occurs, which puts the business at risk.

Let's say that we have two sites that are connected by two parallel paths of equal bandwidth and latency (i.e. equal cost), as shown in Figure 1 below. For purposes of discussion, let's say that the two links are each running at 100Mbps.

Figure 1. Two sites, parallel paths, equal cost

The business manager would like for both paths to carry significant traffic, so that the business earns more money from the use of both links. Let's say that each path is running at 60Mbps on average, with peaks to 95Mbps (a practical maximum). When a failure occurs, the one remaining path would need to carry an average of 120Mbps with peaks of 190Mbps in order to operate without impacting the business applications. But the remaining link is limited to about 95Mbps peak throughput. In the failure scenario, an average of about 25Mbps of traffic can't be carried, and none of the peak traffic can be handled.
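The failover arithmetic above can be restated as a few lines of code, using the figures from the example (two equal links, 95Mbps practical maximum each):

```python
usable_peak = 95      # Mbps, practical maximum per link
avg_per_link = 60     # Mbps, average load on each link before the failure
peak_per_link = 95    # Mbps, peak load on each link before the failure

# After a single link failure, the surviving link must carry both loads.
avg_after_failure = 2 * avg_per_link    # 120 Mbps offered, on average
peak_after_failure = 2 * peak_per_link  # 190 Mbps offered at peak

avg_shortfall = avg_after_failure - usable_peak
print(f"Average traffic that can't be carried: {avg_shortfall} Mbps")  # 25
```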

The single link will be significantly oversubscribed, causing TCP-based applications to run much slower than expected, due to the extra packet loss incurred (see TCP Performance and the Mathis Equation). Assuming that some of the traffic between the sites is Real-time Transport Protocol (RTP) packets carried over UDP for voice or video, the TCP flows will experience even more congestion, because the RTP streams do not back off in response to packet loss. Do you design the voice/video QoS to handle the normal bandwidth case, or design it to handle the case when a failure occurs? It depends on the importance of the different QoS traffic types when creating the design.
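The Mathis equation referenced above estimates steady-state TCP throughput as roughly MSS / (RTT × √loss). A short sketch, with illustrative values (1460-byte MSS, 40ms RTT), shows how quickly loss erodes throughput:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_prob):
    """Approximate upper bound on steady-state TCP throughput (bits/sec),
    per the Mathis et al. model: rate ~ MSS / (RTT * sqrt(loss))."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_prob))

# Same path, increasing loss: each 10x in loss costs ~3.2x in throughput.
for p in (0.0001, 0.001, 0.01):
    mbps = mathis_throughput_bps(1460, 0.040, p) / 1e6
    print(f"loss {p:.4f}: {mbps:.1f} Mbps")  # 29.2, 9.2, 2.9 Mbps
```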

In some cases, the business can operate with lower application performance for a short period of time, provided that the failure is quickly repaired. In other cases, the business must monitor the links and make sure that each link is running at less than 47Mbps (95Mbps total) so that application performance is not degraded when a failure occurs. (The reason for less than 100% is to allow for overhead on the link, such as routing protocol packets, CDP/LLDP packets, inter-frame gaps, etc.) Financial firms typically fall into this category. Retail companies that rely on business applications for revenue generation should also fall into this category.
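The reserve-capacity rule generalizes beyond two links: with N equal links, the total offered load must fit on the N−1 links that survive a single failure. A small sketch, using the 95Mbps usable-peak figure from the example:

```python
def safe_per_link_mbps(usable_peak_mbps, n_links):
    """Maximum average load per link such that the total offered load
    still fits on the n_links - 1 links surviving a single failure."""
    total_allowed = usable_peak_mbps * (n_links - 1)
    return total_allowed / n_links

print(safe_per_link_mbps(95, 2))  # 47.5 Mbps, i.e. "less than 47Mbps" rounded
print(safe_per_link_mbps(95, 3))  # ~63.3 Mbps per link with three equal links
```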

But the non-technical manager sometimes doesn't understand the need for the reserve capacity and puts the company at risk. The network is redundant, but not resilient, because the level of operation is not sufficient for acceptable business operation when a failure occurs.

Too Much Redundancy
Some organizations go overboard with redundancy. Figure 2 below shows a network design that was a hub and spoke design, but with what I'll call "wheel" connections between the spoke sites. The hub was where the main business data center was located.

Figure 2. Hub, spoke, and wheel with insufficient bandwidth

Each spoke link was designed to handle the traffic from that remote site. However, when a failure occurred on one spoke link, the traffic from that site was sent to a neighboring site to be forwarded to the hub site. During busy parts of the day, a link could become overloaded and traffic would be re-routed to a neighbor, which would subsequently become overloaded in turn. The network traffic oscillated as the load varied at each spoke site.

This was not a very resilient network. The organization had asked for a network assessment because of application performance problems. The network management systems had not detected this problem, but once it was discovered, the patterns were visible in the historical performance data.

There is another problem with this design: the routing protocol has to process many alternate paths to determine the lowest cost path. More router memory and CPU are consumed by the routing protocol on all routers, unless special configurations are implemented to limit the number of redundant paths that are considered during the routing convergence calculation.

Finally, there are no equal-cost parallel paths. When a failure occurs, a routing recalculation must occur to find an alternate path. This increases the routing protocol convergence time, which impacts the time to route around failures as well as the time to recover from failures, resulting in poor resilience. You would run the risk that a network failure could cause an outage long enough that it results in dropped voice or video sessions.
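The path-explosion problem is easy to see by counting simple paths from a spoke to the hub, with and without the "wheel" ring. This is a hypothetical six-spoke topology, not the assessed network, but it shows the scale of what the routing protocol must evaluate:

```python
def count_simple_paths(adj, src, dst, visited=None):
    """Count loop-free paths from src to dst in an adjacency-set graph."""
    visited = visited or {src}
    if src == dst:
        return 1
    return sum(count_simple_paths(adj, n, dst, visited | {n})
               for n in adj[src] if n not in visited)

n_spokes = 6
hub = "hub"
spokes = [f"s{i}" for i in range(n_spokes)]

# Plain hub-and-spoke: each spoke connects only to the hub.
star = {hub: set(spokes), **{s: {hub} for s in spokes}}

# Hub, spoke, and wheel: the spokes also form a ring.
wheel = {hub: set(spokes)}
for i, s in enumerate(spokes):
    ring = {spokes[(i - 1) % n_spokes], spokes[(i + 1) % n_spokes]}
    wheel[s] = {hub} | ring

print(count_simple_paths(star, "s0", hub))   # 1 path
print(count_simple_paths(wheel, "s0", hub))  # 11 paths to consider
```

Going from one candidate path per spoke to eleven is the extra state the routers must carry and recompute at every topology change.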

It is better to design a network with two links to each remote site, and then monitor these links for problems. Supporting two links to each spoke site is more expensive than one link to each such site, so that must be taken into account. An alternative solution would be to eliminate the "wheel" link between every-other site (i.e., halving the number of such "wheel" links in the overall network), then sizing the spoke links to handle the load from a pair of sites. There still isn't an equal-cost multipath from a spoke to the hub and back, but it does reduce the number of paths that the routing protocol must handle.

Designing a resilient network is more than just adding redundancy. It is critical to understand the business needs, and then incorporate the level of redundancy that is required to create a resilient network. This is a case where too much of a good thing is bad.