
Verifying Resilience

There have been several excellent posts recently on No Jitter about network resilience and network testing. Gary Audin describes "How to Approach Resilience Planning," Darc Rasmussen talks about using testing to "Make This a Happy Holiday Season," and Mike Burke tells us "How Not to Repeat History of Failed Testing."

Think it can't happen? It happened to Macy's... and over Black Friday, too. As Fortune senior writer Phil Wahba wrote, Macy's website went down on the second-biggest shopping day of the year due to an overflow of shopping traffic.

Each of the above-mentioned articles describes a slightly different perspective on resilience and testing. Underlying the different stories is a common theme: Good planning needs good testing to validate the implementation and the assumptions that went into the design and configuration.

That brings me to the question: Do you conduct failure testing and analysis of your network and UC infrastructure? Or is your organization afraid of touching the network for fear that it will break? Organizations that don't do regular testing are working from a position of hope, as in, "We hope that nothing breaks because it might not fail over to our backup systems." That's a precarious position to be in.

Many organizations already have redundant infrastructure -- dual WAN carriers, redundant core routers and switches, uninterruptible power supplies, backup data paths, and redundant IT services systems. However, I keep encountering organizations that have never run a planned test of their redundant infrastructure. Why wait for an emergency to learn that something doesn't work? It is much better to use planned downtime in which you can perform controlled tests.

It is a good idea to evaluate the failover process. Does the failover work the way you think it should? Is it fast enough for the applications? Does it self-heal when the failed device comes back online?
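Answering "is it fast enough" is easier when you put a number on the failover. A simple probe that timestamps when a service stops answering and when it resumes is usually enough. Here is a rough sketch, with a placeholder address and a plain TCP connect standing in for a real health check:

```python
#!/usr/bin/env python3
"""Rough failover-time probe: repeatedly try a TCP connection to a service
address and report how long it stays unreachable. The address, port, probe
interval, and observation window below are placeholders."""
import socket
import time

TARGET = ("10.0.0.10", 443)   # hypothetical service address to watch
INTERVAL = 0.5                # seconds between probes
DURATION = 300                # total observation window, in seconds

def reachable(addr, timeout=1.0):
    """Return True if a TCP connection to addr succeeds within the timeout."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

outage_start = None
deadline = time.time() + DURATION
while time.time() < deadline:
    if reachable(TARGET):
        if outage_start is not None:
            print(f"Service restored after {time.time() - outage_start:.1f} seconds")
            outage_start = None
    elif outage_start is None:
        outage_start = time.time()
        print("Service unreachable; failover clock started")
    time.sleep(INTERVAL)
```

Run a probe like this against the service while you fail the primary device, and you get a measured recovery time to compare against what the applications can tolerate.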

Disaster recovery may force a backup site to become the primary site for an extended period of time. Will the infrastructure and staff be able to handle the movement of the IT services that would be forced by a disaster at the former primary site? Think about all the companies that were affected by Hurricane Sandy, flooding in the Midwest, fires in the South, or earthquakes in the West. Many inadequately prepared companies simply cease to exist when their IT operations can't quickly return to functional health.

You may also find that something unexpected outside the IT infrastructure creates a problem. A good example of external factors was a facility that had two emergency generators, one large and one small. A major power failure caused the generators to start, but the smaller one soon failed. Unfortunately, the ingress cooling air vents were controlled by power from the smaller generator. When the smaller generator failed, the vents closed, causing the main generator to overheat and shut down. No one had thought to test the generator redundancy.

The server environment in most organizations has already become very dynamic, with VMs, containers, and application mobility. Dynamic networks are next. The network will be changing as workloads increase or decrease and as they move between hardware platforms within the data center. Expect to see applications migrate between data centers or burst to a cloud provider for extra capacity.

Network dynamics will make static testing plans less useful. Sure, there will still be parts of the network that are static, such as ISP connections and perhaps some of the major interconnect links within an organization. But the applications will become more mobile and change size as the customer loads change. Subnets will move around. If a whole rack loses power, can the IT infrastructure move the workloads to another set of servers and reconfigure the network within an acceptable timeframe? Does the application gracefully handle and recover from the loss of some of the infrastructure?

Dynamic testing is needed in IT infrastructures in which applications can move around. Some simple tests need to be run to validate that the new application instance is configured and running properly before moving workloads onto it. This may result in building something that I call an "application-level ping." It is a request that is processed like a real client request but only results in validation that the application is functioning correctly. A simple example is sending a test email to an email server. The test verifies that the email is received by a test account within a specified time. Similar tests are available for credit card processing.
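To make the idea concrete, here is a rough sketch of an email-based application-level ping in Python. The mail host, test account, credentials, and delivery deadline are all placeholders, not anything prescribed above:

```python
#!/usr/bin/env python3
"""Sketch of an 'application-level ping' for email: send a uniquely tagged
message through the mail server, then poll a test mailbox until it arrives
or a deadline passes. Hostnames and credentials are placeholders."""
import imaplib
import smtplib
import time
import uuid
from email.message import EmailMessage

MAIL_HOST = "mail.example.com"   # hypothetical mail server under test
TEST_ACCOUNT = "probe@example.com"
PASSWORD = "change-me"
DEADLINE = 60                    # seconds allowed for delivery

token = uuid.uuid4().hex
msg = EmailMessage()
msg["From"] = TEST_ACCOUNT
msg["To"] = TEST_ACCOUNT
msg["Subject"] = f"app-ping {token}"
msg.set_content("Application-level ping test message.")

# Submit the test message through the server being verified
with smtplib.SMTP(MAIL_HOST, 587) as smtp:
    smtp.starttls()
    smtp.login(TEST_ACCOUNT, PASSWORD)
    smtp.send_message(msg)

# Poll the test mailbox until the tagged message shows up or time runs out
start = time.time()
while time.time() - start < DEADLINE:
    with imaplib.IMAP4_SSL(MAIL_HOST) as imap:
        imap.login(TEST_ACCOUNT, PASSWORD)
        imap.select("INBOX")
        status, data = imap.search(None, "SUBJECT", f'"app-ping {token}"')
        if status == "OK" and data[0]:
            print(f"Delivered in {time.time() - start:.1f} seconds -- PASS")
            break
    time.sleep(5)
else:
    print(f"Not delivered within {DEADLINE} seconds -- FAIL")
```

A similar probe for credit card processing might submit a test transaction against the processor's sandbox and verify that the expected response comes back within the allowed time.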

Developing good test plans is challenging and requires that you understand the IT systems and their interdependent components. It often takes someone with a different perspective, someone who looks at systems differently, so you may need to find a consultant to lead the development process.

Another approach is to start with small parts of the IT infrastructure and expand as testing experience is gained. Static parts of the infrastructure will be easy to test, such as ISP links or failover to a backup UC controller. Don't forget to test the small services that the infrastructure may need to run smoothly, such as the internal DNS servers. I'm always surprised and disappointed to discover both primary and secondary DNS, NTP, and DHCP servers on the same subnet and the same power feeds. Kill the power on the switch to which these servers connect and see how well the IT systems continue to function.
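Before you ever pull that power, a quick script can flag the co-location risk. The following sketch uses made-up addresses and an assumed /24 subnet size as a rough proxy for "same switch, same power feed":

```python
#!/usr/bin/env python3
"""Quick check for redundant service addresses that share a subnet, which
often means shared switches and power. The address lists are placeholders
for your own primary/secondary servers; /24 is an assumed subnet size."""
import ipaddress

SERVICES = {
    "DNS":  ["10.1.10.5", "10.1.10.6"],   # hypothetical primary/secondary pairs
    "NTP":  ["10.1.10.7", "10.2.10.7"],
    "DHCP": ["10.1.20.5", "10.1.20.6"],
}
PREFIX = 24  # assumed subnet size for the comparison

for name, addrs in SERVICES.items():
    nets = {ipaddress.ip_interface(f"{a}/{PREFIX}").network for a in addrs}
    if len(nets) < len(addrs):
        print(f"WARNING: {name} servers share subnet {nets.pop()} -- single failure domain")
    else:
        print(f"OK: {name} servers are on separate subnets")
```

Checking subnets is only an approximation; the real test is still killing the switch and the power feed and watching what keeps working.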

When creating tests, look for things that have a high probability of occurrence, such as an ISP link failure or power failure. Don't overlook device problems like power supply failures or fans that stop running and cause overheating and shutdown. This latter set of problems affects a single device, which is easy to test.

Regular testing schedules have another advantage: they give you a mechanism for doing upgrades on your infrastructure. If the network is configured with redundant A and B halves, can one half be taken offline for service and upgrades? How easy is it to move traffic onto the upgraded half so that the second half can be upgraded?

Of course, you should include the UC infrastructure in the test plans. Fortunately, it is one of the easier components to test. Do phones properly fail over to the secondary controller when the primary is turned off or disconnected from the network? Does the dial plan still work? Are there any functions that depend on the primary UC controller and must be migrated to the secondary controller if the primary is destroyed (think fire or flood)?
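One lightweight way to confirm that a controller is actually answering signaling, before and after the failover test, is a SIP OPTIONS "ping." The sketch below is an assumption-laden example: it uses UDP transport, an unauthenticated OPTIONS exchange, and a placeholder hostname, none of which may match your particular UC platform:

```python
#!/usr/bin/env python3
"""Minimal SIP OPTIONS probe over UDP: confirms a call controller answers
signaling requests at all. The controller address and ports are placeholders;
real controllers may require authentication or TCP/TLS transport."""
import socket
import uuid

def sip_options_ping(server, port=5060, timeout=2.0):
    """Send one OPTIONS request and return the first line of any reply."""
    call_id = uuid.uuid4().hex
    local_ip = socket.gethostbyname(socket.gethostname())
    request = (
        f"OPTIONS sip:{server}:{port} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_ip}:5070;branch=z9hG4bK{call_id[:8]}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:probe@{local_ip}>;tag={call_id[:10]}\r\n"
        f"To: <sip:{server}>\r\n"
        f"Call-ID: {call_id}@{local_ip}\r\n"
        f"CSeq: 1 OPTIONS\r\n"
        f"Content-Length: 0\r\n\r\n"
    )
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request.encode(), (server, port))
        try:
            reply, _ = sock.recvfrom(4096)
            return reply.decode(errors="replace").splitlines()[0]  # e.g. "SIP/2.0 200 OK"
        except socket.timeout:
            return None

# Example: check the secondary controller before starting the failover test
status = sip_options_ping("ucm-secondary.example.com")
print(status or "No SIP response -- do not proceed with the failover test")
```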

Automation makes the testing easier and faster. Eliminate as much manual testing from the process as you can, or the tests won't get run as often as they need to be. However, there will be some tests for which there is simply no substitute for a pair of hands, like pulling the power plug on a core router. Just make sure that the automation system verifies that the redundant router is healthy before you pull the plug.
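That pre-check itself is easy to automate. Here is a minimal sketch that assumes the standby router answers ICMP pings and uses a made-up address; a production check would also confirm routing adjacencies and interface state via SNMP or the device's API:

```python
#!/usr/bin/env python3
"""Pre-check before a manual pull-the-plug test: confirm the redundant
router still answers pings. The address is a placeholder, and 'ping -c'
is the Linux/macOS syntax (Windows uses -n)."""
import subprocess
import sys

STANDBY_ROUTER = "10.0.0.3"   # hypothetical redundant core router

def pings_ok(host, count=5):
    """Return True if the host answers a short burst of pings."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if pings_ok(STANDBY_ROUTER):
    print(f"{STANDBY_ROUTER} is responding -- safe to proceed with the manual test")
else:
    sys.exit(f"{STANDBY_ROUTER} is NOT responding -- abort the test")
```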

Learn more about systems management and network design trends and technologies at Enterprise Connect 2017, March 27 to 30, in Orlando, Fla. View the Systems Management & Network Design track, and register now using the code NOJITTER to receive $300 off an Entire Event pass or a free Expo Plus pass.