
Real-Life Lessons Learned from Hurricane Michael, Others

Being a Florida boy, I’ve been through many hurricanes over the years -- the last 54 years, to be exact. The worst of these have been Hurricane Andrew (Category 5, Miami, August 1992) and Hurricane Ivan (Category 3, Pensacola, September 2004). In fact, Hurricane Ivan was part of a four-hurricane flurry in our area during a 15-month period in 2004-2005.
 
In my 31 years in telecom, I’ve had clients dealing with storms, too. It seems that almost every year, I have clients in Florida or along the Gulf Coast that must deal with storms. Today, I wanted to take you through my clients’ experiences with hurricanes so that you can learn from them and possibly help your organization become better prepared for any disasters that cross your path.
 
Recently, I’ve been working with clients in New Orleans (it’s hard to forget Katrina, Category 3, 2005); in the Panama City, Fla., area (Michael, Category 5, October 2018); and in the Florida Keys (Irma, Category 4, September 2017) that have all dealt with catastrophic storms. These storms have had different characteristics (wind event, flooding, major carrier outages, etc.), and each affected my clients in different ways. So I thought it might be helpful to share how different disasters have caused different problems, in different areas, for different carriers, and for different clients.
 
Hurricane Michael
Hurricane Michael hit the east side of Panama City with a mighty smash -- Category 5 winds of up to 160 mph. Michael flattened everything -- road signs, billboards, trees, fences, power poles -- not to mention the devastation it brought to many homes and buildings. From a telecom standpoint, it wreaked havoc, too. The storm’s biggest telecom victim was Verizon Wireless. Customers left the carrier in droves to find reliable wireless service. However, AT&T wasn’t without a black eye: Its Internet access was down for a portion of customers, too, even for some living as far as an hour away.
 
Lesson learned: Even major carriers can suffer significant outages in a storm.
 
One of my clients reported that its Windstream voice service was down for weeks. However, the client didn’t immediately put in a trouble ticket with the carrier, so it’s hard to say how long the network was actually down. Once the client did submit a trouble ticket, Windstream restored service in 24 hours.
 
Lesson learned: Submit a trouble ticket, even after a major storm. Don’t assume that your service will be restored along with the rest of the network.
 
Telco lines on power poles suffered the biggest impact. Not surprising, right? In all my years dealing with telecom, this was by far the worst damage I’ve seen to aerial fiber. It was an issue during Ivan, for example, but not nearly as crippling to the networks.
 
And some people forget that a cellular signal received by a cell tower immediately travels down the tower structure to a land-based fiber network, which carries the signal back to the carrier’s switching centers.
 
The argument for aerial fiber is twofold. First, as a carrier told me, buried cables are four times more likely to have an outage compared with aerial cables. Considering digging and backhoes, that makes sense, right? So in most places, there’s a strong argument for using aerial cable. Second, modern networks rely on self-healing rings for re-routing around failures, whether those involve aerial or buried fiber.
 
So here are the issues that came with this storm:
  1. The storm knocked down so many poles that both the primary and the backup network paths had multiple outages. So no re-routing could take place.
  2. The telco fiber was riding on power poles; the storm snapped many of these in half or knocked them down, too. A route from a customer’s main location to the carrier’s central office might have included five, 10, or even 20 fiber breaks/outages.
 
While waiting for the utility company to repair or replace the power poles, carriers can normally splice and repair fiber lying on the ground to get it back in service. In this case, however, as the power crews worked to stand up or re-install new poles, they often cut the fiber a second time (I’m not sure whether this was on purpose, but it happened often!). So a fiber outage that a carrier repaired and got working in the morning would be out of service again in the afternoon. This affected the telco carriers… and the wireless carriers… and created overall havoc.
 
Lessons learned: 1) Understand your carrier’s infrastructure (routes, rings, aerial vs. buried cable), and 2) don’t rely on just one technology. Some companies provide dual-SIM wireless devices, and satellite is available for data needs (satellite phones have their purpose, but VoIP via satellite… not so great).
 
Hurricane Irma
Another client, this one in the Florida Keys, experienced Irma’s fury. There’s basically one way on and off the Keys from mainland Florida, and all power and telco lines travel that same right of way. So high winds and storm surge can take out bridges and power poles (with the telco lines on them) and wreak havoc on communications.
 
In my client’s case, most communications services were surprisingly working after the storm passed through on Sunday, Sept. 10. However, by Monday morning, most of the communications services were no longer working (AT&T Wireless, Verizon Wireless, AT&T VoIP/Internet services, Comcast telco/Internet services, and others).
 
The few things that stayed in service were AT&T copper phone lines (basically fax lines the company had re-purposed) for local voice calling on the islands, Sprint Wireless phones, and Motorola radios. The company also used some satellite phones, but with limited success.
 
Regarding the outages on Monday, my client surmised that the carriers’ generators ran out of fuel overnight. The carriers told the company that wasn’t the reason, but provided no alternative explanation.
 
Lesson learned: A carrier’s ability to maintain services during outages may last only as long as its own backup power.
 
Hurricane Katrina
Finally, for my New Orleans client, the massive flooding that besieged the city following the breaches in the Lake Pontchartrain levees swamped its HQ office, including the phone system on the building’s first floor, as well as several other locations -- the company lost one office completely. Its ability to serve customers was severely limited, primarily because the main call center (normally housed at the HQ location) and the phone system were out of commission.
 
It took some time, but the company eventually set up a call center at a third-party location so it could serve customers.
 
Lesson learned: The company has since moved its phone system to the second floor. In addition, we’re re-architecting its network with diverse carriers and multiple technologies for more flexibility and resiliency, and the company will have the ability to take call center calls from any location.
 
Best Practices
I could go on with other examples, but the lessons learned I noted for each are applicable to any hurricane -- or really any other disaster -- scenario. So, too, are these best practices:
  1. Don’t rely on one vendor. There are many vendor options out there. Have at least one backup vendor -- maybe two.
  2. Don’t rely on one network technology -- it’s just not enough, and you have lots of options (MPLS, Ethernet, VPN, SD-WAN, wireless, satellite, copper, fiber, etc.) for carrying voice and data traffic.
  3. Be flexible! You need to build flexibility into your infrastructure and into your planning, so that you can react and adapt to the unique set of circumstances that may come your way -- some of which you’ll be able to anticipate and some of which you won’t.
  4. Locate your disaster recovery site at least 200 miles away (preferably further). I know it’s more convenient to visit a disaster recovery site that’s only an hour or two away, but power outages, flooding, and wind might take out your backup site if it’s too close.
  5. Planning -- this, of course, is the No. 1 priority. Be aware of the common practices for planning for disasters of many types (short duration, long duration, wind events, flooding events, fire, active shooter, etc.) -- and think about what’s unique to your situation or location that deserves special consideration.
  6. Testing -- I can’t say this enough. Even when you think you have a terrific plan or infrastructure in place, if you don’t test it regularly (at least annually, maybe quarterly), you’ll have problems and miss the opportunity to execute on your great plans.
  7. Expertise -- get help. If you don’t have this expertise on staff, don’t go it alone; there are both vendors and consultants that can give you the insight that you’ll need to give your organization the best chance to weather these storms (or other disasters).
With each disaster comes a set of new lessons and data points. If we can learn from these disasters and apply some good fundamental principles, we’ll be in a better position to withstand the next disaster and whatever it may bring.
 

"SCTC Perspectives" is written by members of the Society of Communications Technology Consultants, an international organization of independent information and communications technology professionals serving clients in all business sectors and government worldwide.