When it comes to determining bandwidth needs for a particular application or service, let alone an entire organization, you'll find no easy answer but rather a slew of opinions and numerous calculators.
This is a persistent problem. A couple of years ago, in the post "The Broadband Availability Gap: Traffic Modeling Challenges," I noted the FCC's findings about the difficulty of traffic engineering:
Herein lies the key issue: Vendors and network service providers all have their own bandwidth recommendations and, for the most part, these aren't holistic approaches to traffic modeling.
The discussion often comes up in the education vertical around online curriculum efforts, assessment testing, and one-to-one initiatives. It also comes up regarding UC, conferencing, video, and other network applications.
The use of cloud business intelligence (BI), I believe, could help improve traffic modeling. In another post, "Cloud BI: Visualizing the Way to Better Business," I noted that the move to the cloud is about the ability to do business on a higher level. On the subject of using cloud BI to improve traffic modeling, a discussion I had with an IT services firm in the Washington, D.C., area about cloud-based firewall monitoring comes to mind. With firewall monitoring, the data reported includes egress and ingress traffic and total throughput in Kbps (see graphic below). A next step could be a visual dashboard of the WAN links, with egress and ingress displays showing actual throughput and the percent occupancy of each link. Adding in the many other data sets covering traffic types and their representative usage will help an organization paint a picture of the bandwidth consumed.
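To make that concrete, here is a minimal sketch, in Python, of the occupancy math such a dashboard would run. The link names, capacities, and samples below are hypothetical stand-ins for whatever your firewall or monitoring feed actually reports:

```python
# Minimal sketch: turn raw firewall throughput samples into per-link
# occupancy figures for a dashboard. Link names, capacities, and sample
# values are hypothetical placeholders, not a real monitoring API.

# Provisioned capacity of each WAN link, in Kbps.
LINK_CAPACITY_KBPS = {
    "wan-primary": 100_000,  # 100 Mbps
    "wan-backup": 50_000,    # 50 Mbps
}

# Polled throughput samples in Kbps: (link, direction, value).
samples = [
    ("wan-primary", "ingress", 42_000),
    ("wan-primary", "egress", 18_500),
    ("wan-backup", "ingress", 3_200),
    ("wan-backup", "egress", 1_100),
]

def occupancy_report(samples, capacities):
    """Summarize each link's throughput and percent occupancy."""
    report = {}
    for link, direction, kbps in samples:
        entry = report.setdefault(link, {"ingress": 0, "egress": 0})
        entry[direction] = max(entry[direction], kbps)  # keep peak sample
    for link, entry in report.items():
        # Full duplex: each direction has the full capacity available,
        # so the link's occupancy is set by its busier direction.
        busier = max(entry["ingress"], entry["egress"])
        entry["occupancy_pct"] = 100.0 * busier / capacities[link]
    return report

for link, entry in occupancy_report(samples, LINK_CAPACITY_KBPS).items():
    print(f"{link}: in {entry['ingress']} Kbps, out {entry['egress']} Kbps, "
          f"{entry['occupancy_pct']:.1f}% occupied")
```

Feed the same summary into a charting layer and you have the egress/ingress displays described above; the interesting work is in deciding which traffic types to break out next.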
Links, whether between switch stacks, servers, routers, or ISPs, are only part of the overall picture needed when predicting traffic. Knowing the occupancy of a WAN link is nice, but you can't stop there and assume that because you're using only 60% of the available bandwidth, you have enough. The demarcation, as I've mentioned in previous articles, is soft. And hammering out latency doesn't always mean adding more WAN bandwidth; piling on bandwidth is a common mistake organizations make when trying to fix an ailing network.
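Why isn't 60% average occupancy proof of headroom? A toy illustration, with invented numbers, shows how an averaged counter hides saturation:

```python
# Toy illustration (invented numbers): a link that averages 60% busy
# can still be saturated in bursts the average never shows.

CAPACITY_KBPS = 100_000  # hypothetical 100 Mbps WAN link

# Per-second throughput (Kbps) for one minute of bursty traffic:
# 40 quiet seconds at 40 Mbps, 20 seconds pinned at line rate.
per_second = [40_000] * 40 + [100_000] * 20

avg = sum(per_second) / len(per_second)
saturated = sum(1 for s in per_second if s >= CAPACITY_KBPS)

print(f"average utilization: {100 * avg / CAPACITY_KBPS:.0f}%")   # 60%
print(f"seconds at line rate: {saturated} of {len(per_second)}")  # 20 of 60
```

A five-minute poller would report a comfortable 60%, yet for a third of that minute every additional packet was being queued or dropped.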
A few years ago, in a really cool eight-part blog series on application performance management (APM), Gary Kaiser of Dynatrace cited 12 potential performance bottlenecks. He made some key points about what we should understand about what we're measuring. One of his observations, for example, is that operational performance sits at the intersection of business and IT metrics. Organizations with APM or network APM, he said, are bound to reduce mean time to repair/restore and to spend less legwork resolving network issues.
While this all impacts network performance and user experience, it doesn't answer the bandwidth question. On both sides of that loosely defined demarc point, isolating bottlenecks and reducing or minimizing latency will play a role in bandwidth demand. Predicting the amount of traffic and the peak bandwidth requirement is still an estimate; the science is an ongoing effort. Examining only WAN link utilization will not give the proper measure of how much bandwidth you need, and relying on ratios and other novel approaches won't work so well either.
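To see why ratio-based approaches come apart, consider the back-of-envelope math a typical calculator runs. Every figure below is invented; the point is how sensitive the answer is to the concurrency ratio you assume:

```python
# Back-of-envelope estimate of the kind a ratio-based calculator runs.
# All figures are invented; the point is the spread in the answers.

USERS = 500
PER_USER_KBPS = 1_200  # nominal per-user stream for a video/UC session

for concurrency in (0.10, 0.25, 0.50):  # assumed fraction active at once
    demand_mbps = USERS * concurrency * PER_USER_KBPS / 1_000
    print(f"concurrency {concurrency:.0%}: ~{demand_mbps:.0f} Mbps")
```

Same users, same application, and a fivefold spread in the answer, from roughly 60 Mbps to 300 Mbps. The assumed ratio is doing all the work, which is exactly why such estimates are no substitute for measured traffic.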
Instead, focus on traffic flow, reducing latency, and implementing "best" or better configurations to ensure that traffic moves across the network effectively and efficiently. The argument for more bandwidth is akin to the argument for more spectrum, and the alternative to either is improved efficiency. Anything that is measurable can be improved, and looking at packet flow and what impacts performance is a better hedge than simply adding bandwidth.
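One concrete case where efficiency beats raw bandwidth: a single TCP flow's throughput is capped at roughly its window size divided by round-trip time, so past that cap a bigger circuit buys nothing while lower latency pays off directly. The sketch below applies that standard formula to invented link parameters:

```python
# A single TCP flow is capped at roughly window_size / RTT, no matter
# how much bandwidth the circuit offers. The parameters are invented.

WINDOW_BYTES = 64 * 1024  # a classic 64 KB TCP receive window

def tcp_cap_mbps(rtt_ms):
    """Approximate max single-flow throughput in Mbps for a given RTT."""
    return (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1_000_000

for rtt_ms in (80, 40, 20):
    print(f"RTT {rtt_ms} ms -> ~{tcp_cap_mbps(rtt_ms):.1f} Mbps per flow")
```

On a 100-Mbps circuit with 80 ms of round-trip latency, that flow tops out near 6.6 Mbps; halving the latency doubles its throughput, while upgrading the circuit changes nothing. That's the efficiency argument in miniature.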