How Not to Repeat History of Failed Testing
For anyone who still questions the value of testing unified communications, let's look back to a popular case study that illustrates what can happen when something goes horribly wrong.
Most of us remember the fiasco of the federal government's HealthCare.gov launch back in October 2013. It was a technological failure resulting from the unsuccessful attempt to bring together many different pieces of functionality. While the development phase took longer than expected, the team failed to change the cutover date. Instead, it cut the time for testing the implementation right out of the schedule. Team members kept their fingers crossed and hoped that everything would work. Unfortunately, the jump from development right into production led to disastrous consequences.
Hindsight Is 20/20
I live in Minnesota, where the state recently launched a statewide healthcare initiative. Just like the HealthCare.gov team, state developers went live without testing the website or contact center. Statements quoted in local news coverage actually have officials saying quite specifically that they cut out testing. In retrospect, they made a huge mistake.
These two similar examples illustrate why testing is so important -- it's the only way to be sure you haven't bought a "pig in a poke," as the old saying goes. Testing will reveal configuration problems or underlying load-balancing issues within your environment.
I've heard about a telecommunications environment with an IVR built to handle 600 concurrent telephone calls -- but the service provider's toll-free circuits could only deliver 300 concurrent calls from the outside in. That's a classic mismatch that occurs when one person is responsible for public network connectivity and someone else is responsible for application development and configuration. These are the kinds of issues you uncover when you load the system to capacity at the points where traffic enters and exits.
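A cross-team capacity audit can catch this kind of mismatch on paper, before any traffic flows. The sketch below is a minimal illustration -- the component names and figures are hypothetical, mirroring the IVR/trunk mismatch above -- but the principle is real: the end-to-end system can never carry more calls than its weakest link.

```python
def find_bottleneck(capacities):
    """Return the component with the lowest concurrent-call capacity.

    capacities: dict mapping component name -> max concurrent calls.
    The effective end-to-end capacity is the minimum across all layers.
    """
    name = min(capacities, key=capacities.get)
    return name, capacities[name]

# Hypothetical example: an IVR sized for 600 calls sitting behind
# toll-free circuits provisioned for only 300.
capacities = {
    "toll_free_circuits": 300,  # provisioned by the carrier team
    "ivr_ports": 600,           # sized by the application team
    "agent_seats": 450,         # staffed by the contact-center team
}
component, limit = find_bottleneck(capacities)
print(f"Effective capacity: {limit} calls, limited by {component}")
```

Running the numbers this way makes the disagreement between teams visible long before a capacity test -- though only an end-to-end load test proves the audit matches reality.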
Verify Components Work Together
The challenge with building a complicated data processing or telecommunications environment is making sure that all the components work together effectively. It involves integrating software developed by many different people for disparate purposes, while also interacting with networks and databases. Each component can work perfectly on its own, but integration can cause issues based on scale and the APIs in use.
For more insight from IR, attend the "Hybrid/Cloud UC: Getting the Transition Right" webinar next Wednesday, Dec. 14, at 2:00 p.m. ET. In this Enterprise Connect-hosted/IR-sponsored event, you'll hear from Zeus Kerravala, founder and principal analyst with ZK Research, and Skip Chilcott, global head of product marketing at IR, on practical ways to get your transition to cloud UC right.
Register now and tune in next Wednesday!
Soak testing, aimed at verifying a system's stability and performance characteristics over time, is crucial because underlying issues might not emerge immediately. Everything might seem great when a few users try out a new telecom system or Web environment. What you really need to do is run the system at full load -- at the rate you expect in production -- for a sustained period of time. That way you can be confident there will be no surprises later.
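The structure of a soak test can be sketched in a few lines. This is a minimal illustration, not a production harness: the `probe` callable is a hypothetical stand-in for whatever places one synthetic call or request, and in practice a dedicated load generator or call simulator would fill that role and run for hours, not seconds.

```python
import time

def soak_test(probe, rate_per_sec, duration_sec):
    """Drive the system at a steady rate for a sustained period.

    Calls probe() roughly rate_per_sec times per second until
    duration_sec elapses, and counts failures. Problems that only
    appear under sustained full load (leaks, pool exhaustion) are
    exactly what this is meant to surface.
    """
    interval = 1.0 / rate_per_sec
    deadline = time.monotonic() + duration_sec
    attempts = failures = 0
    while time.monotonic() < deadline:
        attempts += 1
        if not probe():
            failures += 1
        time.sleep(interval)
    return attempts, failures

# Demo with a stub probe and a tiny duration; a real soak run would
# target full expected production volume for hours or days.
attempts, failures = soak_test(lambda: True, rate_per_sec=50, duration_sec=0.2)
print(f"{attempts} probes, {failures} failures")
```

The key design point is the sustained, steady rate: a short burst at full load tells you little about stability over time.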
Perform Periodic Health Checks
You must also periodically run a health check against the environment. As time goes by, things just under the surface of an environment will change. Ongoing testing ensures that an approaching deadline won't force you into a peak traffic situation where you enter blindly with your fingers crossed, hoping that nothing has changed since the last time you experienced maximum call volumes.
Periodic health checks involve continuously sampling the availability, functionality, and performance of any customer-facing technology. You need to use a controlled process with a protocol that knows exactly what the system is supposed to do. It will be able to identify components that aren't working as intended and bring them to your attention in a timely manner.
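The controlled process described above can be sketched as a simple check runner. Everything here is illustrative -- the component names and stub checks are hypothetical; a real health check would exercise actual IVR prompts, web pages, and database queries against known expected responses.

```python
def health_check(checks):
    """Run each named check and return the components that fail.

    checks: dict mapping component name -> zero-argument callable that
    returns True when the component behaves as intended. A check that
    raises an exception counts as a failure, so a crashed probe still
    gets flagged rather than silently passing.
    """
    failing = []
    for name, check in checks.items():
        try:
            ok = check()
        except Exception:
            ok = False
        if not ok:
            failing.append(name)
    return failing

# Hypothetical example: two stub checks standing in for real probes.
checks = {
    "web_frontend": lambda: True,   # e.g., fetch a page, verify content
    "ivr_menu": lambda: False,      # e.g., place a call, verify the prompt
}
print(health_check(checks))  # names of components needing attention
```

Run on a schedule, a check like this turns "fingers crossed" into a timely alert: any component drifting from its expected behavior shows up in the failing list before peak traffic does.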
What We Learned
If there's one key takeaway from the HealthCare.gov case study, it's that these two areas of testing are crucial for a successful implementation. Remember, first you must verify all components are working together so people can use them effectively and efficiently. Second, you must confirm capacity on a periodic basis to ensure the technology is available 24 hours a day, seven days a week. As a result, users won't have to fall back to a chat support session or an IVR and consume valuable live agent time.
Whether you're operating a self-service environment, website, or agent desktops, you must make sure everything is properly balanced and integrated from end to end. All components should function exactly the way they're supposed to while under load. If you cut out the testing before you go live, you're effectively cutting away your safety net. Look at what happened to Kathleen Sebelius, former Health and Human Services secretary -- the unfortunate decision to skip testing cost her job.