Is Everyone Buying Too Much Bandwidth?
While I spend much of my time doing analyst work and looking at new and emerging technologies, divining future directions and helping vendors understand the impact of these changes on their product lines, I still get out and do "honest" work once in a while. Most of that work deals with helping enterprises understand what is going on in their networks, how much of it is going on, and helping them optimize their network spends. What I keep learning is that putting your company's checkbook on auto-pilot is a great way to get taken for a ride.
Devil in the Details
I do a lot of this work with my associate, Ryan Larsen of Urban Technology Group, and we have developed a basic formula for it. Ryan is great on detail work, so the first phase of a project typically involves him going through the client's telecom bills and contracts and creating a detailed inventory of the services to which it's subscribed. His inventory includes carriers, locations, service types, speeds, monthly costs, contract end dates, circuit numbers -- you name it.
Surprisingly, most organizations don't have this information readily at hand. They can tell you charges by account codes and where the checks are going, but that's about it. We learn some of the darnedest things each time we go through one of these exercises.
At one client, for example, we found a few mystery PRIs -- nobody knew exactly what or where they were, but the company was paying out more than $1,000 per month for them. We eventually got together with the most tenured IT guy and figured out that these PRIs were associated with an old Nortel Meridian SL-1 PBX. Thing is, the company had replaced that SL-1 with a Cisco UCM several years prior. Later we found that same client had been paying more than $8,300 per month for a coast-to-coast DS-3 that had no traffic on it.
Funny thing is, this case is nothing out of the ordinary. On most of these assignments we find the ever-reliable accounts payable system issuing checks every month for stuff we discover was disconnected, and the applications shut down, years ago. During the discovery phase, Ryan generally finds savings of anywhere from 5% to 30% -- services the client has been paying for but not using.
Turning to the Traffic
Fortunately, I don't get involved much with the bill and contract review work (I'm patient, but nowhere near that patient). Where I get involved is in the second phase of the project, which involves determining how much of this stuff they actually need -- or, in more formal terms, "traffic optimization." This involves looking at traffic information the client often has but rarely bothers to look at -- there are only so many hours in the day. It's hard to find a company of any size that doesn't have some type of traffic measurement capability. It's even harder to find one that looks at it on anything close to a regular basis (i.e., more often than "never").
I have been doing traffic analysis since we were dealing with estimating response times on IBM 3270 interactive networks and computing file transfer times on dial-up bisync batch terminals. In those days, the analysis involved things like figuring out how many characters were in a block (including all of the protocol overhead), multiplying by eight to get bits and dividing by the line speed to figure out how long it would take to send that block with a 4,800-bps modem, doing the same for the ACK message, and adding in the link propagation delay plus a reasonable percentage of retransmitted blocks. These are skills I hope I never have to resurrect.
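For the curious, that old back-of-the-envelope method fits in a few lines. This is a sketch only -- the block size, ACK length, propagation delay, and retransmission rate below are illustrative assumptions, not figures from any particular network:

```python
# Rough response-time model for one block on a half-duplex bisync link,
# in the spirit of the hand calculations described above.

def block_time_sec(block_chars, line_bps, prop_delay_sec,
                   ack_chars=8, retransmit_pct=0.02):
    """Time to send one block and receive its ACK, inflated for retries."""
    bits_block = block_chars * 8        # data block, protocol overhead included
    bits_ack = ack_chars * 8            # short acknowledgment message
    one_pass = (bits_block / line_bps   # serialize the block
                + prop_delay_sec        # block propagation
                + bits_ack / line_bps   # serialize the ACK
                + prop_delay_sec)       # ACK propagation
    return one_pass * (1 + retransmit_pct)

# Example: a 512-character block on a 4,800-bps modem, 10-ms one-way delay
t = block_time_sec(512, 4800, 0.010)    # roughly 0.9 seconds per block
```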
That was the data side of things, but all of us who went through Bell System training back then also had to learn how to do those Erlang computations along with the ancillary skills like estimating busy-hour traffic if it wasn't available (i.e. 17% of the total day's traffic), and any number of other arcane machinations to determine how many analog trunks we needed. If you need to do this today, you can find an online Erlang B calculator here.
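If you'd rather not hunt down a calculator, the Erlang B formula itself is simple to compute with the standard iterative recurrence. A quick sketch (the 17% busy-hour estimate from above is included as an illustrative default, not a universal constant):

```python
def erlang_b(traffic_erlangs, trunks):
    """Blocking probability for offered traffic on a given trunk count,
    computed with the numerically stable Erlang B recurrence."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

def trunks_needed(traffic_erlangs, blocking_target=0.01):
    """Smallest trunk count that meets the blocking target (e.g., P.01)."""
    n = 1
    while erlang_b(traffic_erlangs, n) > blocking_target:
        n += 1
    return n

# Example: 10 erlangs of busy-hour traffic at a 1% blocking objective
trunks = trunks_needed(10)      # 18 trunks
```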
I will say that the network monitoring tools today are way better than what we had back in the day, but in my experience, the network designs are way worse. I'm regularly seeing 100-Mbps Internet access links running at a sustained data rate of 5 Mbps and a peak rate of 10 Mbps. Conservatively, that's 10 times as much capacity as you need!
MPLS networks are typically more predictable as much of that traffic is voice, but still we see gross overbuying. I looked at one site recently that had three 100-Mbps MPLS connections. The sustained traffic in the heavier direction (which in this case turned out to be outbound) was 2 to 5 Mbps. Peak traffic occurred outside of business hours, so it was clearly some type of file back-up operation (i.e., "non-time-sensitive"), and it never exceeded 21 Mbps. That was the only thing going on at that time of the day, so obviously the thing that was generating the traffic couldn't spit it out any faster than 21 Mbps, but the customer was paying for 300 Mbps of MPLS capacity.
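The arithmetic on that site is worth spelling out. Sizing to peak demand plus growth/fail-over headroom is the general approach; the 50% headroom figure below is an illustrative assumption on my part, not a hard rule:

```python
# Back-of-the-envelope rightsizing for the MPLS site described above:
# 300 Mbps purchased against a 21-Mbps observed peak.

def rightsize_mbps(peak_mbps, headroom=0.5):
    """Capacity to meet peak demand plus a headroom allowance
    (headroom=0.5 means 50% above observed peak -- an assumed figure)."""
    return peak_mbps * (1 + headroom)

purchased = 3 * 100                 # three 100-Mbps MPLS circuits
needed = rightsize_mbps(21)         # 31.5 Mbps
overbuy = purchased / needed        # nearly 10x more capacity than needed
```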
My favorite bad example is a SIP access connection for 400 simultaneous call paths -- the only thing running on a 500-Mbps pipe. We estimate a 64-Kbps voice connection carrying 20-msec voice samples will require about 90 to 100 Kbps of bandwidth. At that rate, 400 simultaneous calls equals 36 to 40 Mbps -- less than 10% of the capacity assigned to that access connection.
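Where does 90 to 100 Kbps per call come from? Header overhead on top of the 64-Kbps codec stream. A quick sketch using standard RTP/UDP/IPv4 header sizes, with Ethernet framing (including preamble and inter-frame gap) as the assumed access technology:

```python
# Per-call bandwidth for 64-Kbps (G.711) voice in 20-msec packets,
# counting the per-packet header overhead.

PACKETS_PER_SEC = 1000 // 20                     # 20-msec samples -> 50 pkts/sec
PAYLOAD_BYTES = 64_000 // 8 // PACKETS_PER_SEC   # 160 bytes of voice per packet
RTP, UDP, IPV4 = 12, 8, 20                       # standard header bytes
ETHERNET_L2 = 18 + 20                            # header/FCS + preamble/gap

def call_kbps(l2_overhead=ETHERNET_L2):
    bytes_per_packet = PAYLOAD_BYTES + RTP + UDP + IPV4 + l2_overhead
    return bytes_per_packet * PACKETS_PER_SEC * 8 / 1000

per_call_kbps = call_kbps()              # ~95 Kbps per call on Ethernet
total_mbps = 400 * per_call_kbps / 1000  # ~38 Mbps for 400 simultaneous calls
```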
Again, the crazy thing is that these numbers aren't out of the ordinary! Even after we disconnect the 5% to 30% of stuff we find that a client isn't even using, we can still reduce link capacity sizes and meet peak demand and provide for reasonable levels of fail-over capacity. Overall, we can reduce spend by 25% to 50%.
Having seen the same scenario play out time after time, I take it as the "new normal" in IT. Organizations have now cut back so far on their IT budgets that there's really no one around who's got the time to do this kind of basic analysis. So long as users aren't complaining, IT will just buy a bigger pitchfork to toss hundred-dollar bills onto the bonfire.
Maybe I'm getting cranky in my old age, but I abhor unnecessary waste. Sure, IT has many important priorities to address, but when I came into the field our standing orders were: "Spend the company's money wisely." IT has taken so many lumps for poor execution (which I'd suggest is due in part to our being routinely expected to do way too much with way too little) that the performance expectation may be slipping from "insist on excellence" to "minimize the screaming."