Terry Slattery | January 29, 2013

QoS in the LAN? You're Kidding!

There are a bunch of excuses for not doing QoS and none of them are valid when examined in detail.

I occasionally hear from different sources that QoS is not needed in the LAN. A variety of rationales are used, but I think the main factor is the complexity of configuring QoS. I thought that it would be interesting to take a close look at a number of excuses for not using QoS.

We Use High Speed Links
Some people think that bandwidth is a good replacement for QoS. Their statement is often something like this:

QoS configuration is pretty complex, so we just made sure that we have at least X-bandwidth links everywhere. [Replace X with the speed du jour.] With high-speed links, we don't need QoS.

The presumption is that with sufficient bandwidth, congestion will never occur. In practice, this is not the case. Applications are consuming increasingly more network bandwidth. The addition of video content, data-intensive applications, and graphics is increasing the volume of network traffic.

Possibly the worst example of this excuse that I've seen was a network staff that decided all edge ports should be configured for 10 Mbps half duplex! Their intent was to limit the data volume at the edge, somewhat like admission control. It definitely limited the data volume; however, everyone had poor application performance, which didn't help the organization compete in the business world.

Instantaneous Buffer Congestion
The high speed link argument doesn't work because of something called instantaneous buffer congestion (also called instantaneous interface congestion). Interface buffers quickly fill due to the bursty nature of IP network traffic. Modern workstations and servers can easily fill a 1-Gbps link, and multiple flows on a common network infrastructure can quickly congest 10-Gbps links. It is only when TCP detects packet loss that it reduces its data rate. A link that is transporting multiple flows can quickly become congested, causing packet loss. QoS is needed to handle the prioritization of packets at the congested interface.
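To make this concrete, here is a toy simulation (every number, including the 64-packet queue, the drain rate, and the burst size, is made up for illustration) showing how bursts overflow a small interface queue even though the average load is only 40% of line rate:

```python
# Toy model of instantaneous buffer congestion; every number here is
# hypothetical. Bursty arrivals overflow a small interface queue even
# though the long-term average load is well below line rate.

QUEUE_LIMIT = 64        # packets the interface can buffer (assumed)
DRAIN_PER_TICK = 25     # packets the link can transmit per tick (assumed)

# Traffic pattern: a burst of 100 packets, then nine quiet ticks, repeated.
arrivals = ([100] + [0] * 9) * 60   # 600 ticks, 40% average load

queued = dropped = sent = 0
for burst in arrivals:
    space = QUEUE_LIMIT - queued
    accepted = min(burst, space)
    dropped += burst - accepted       # tail drop once the queue is full
    queued += accepted
    tx = min(queued, DRAIN_PER_TICK)  # the link drains the queue each tick
    queued -= tx
    sent += tx

offered = sum(arrivals)
print(f"average load: {offered / (len(arrivals) * DRAIN_PER_TICK):.0%} of line rate")
print(f"dropped: {dropped} of {offered} packets ({dropped / offered:.0%})")
```

In this sketch roughly a third of the offered traffic is tail-dropped, even though anything averaging over the whole run would report the link as only 40% utilized.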

The NMS Doesn't Show Congestion
The next excuse is about measuring congestion:

Our NMS shows low link utilization, so there can't be any congestion.

The problem is that NMS polling happens at such a slow rate that the instantaneous peaks aren't visible in the performance data. Reducing the polling period from 10 minutes to 1 minute will show more detail, but still not enough to show instantaneous buffer congestion. We've seen serious congestion on a 1-Gbps link that the NMS reported as being 30%-40% utilized (at a 10-minute polling interval).
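As a back-of-the-envelope example (the numbers are hypothetical, chosen only to land in the same ballpark), a 1-Gbps link that is saturated and dropping packets for two minutes out of a ten-minute polling interval still averages out to the mid-30s:

```python
# Hypothetical numbers: a 1-Gbps link that is saturated (and dropping) for
# 2 minutes of a 10-minute NMS polling interval, and 20% loaded otherwise.

link_bps = 1_000_000_000        # 1-Gbps link
poll_interval_s = 600           # 10-minute polling interval
saturated_s = 120               # 2 minutes at line rate
quiet_load = 0.20               # 20% load for the remaining 8 minutes

bits = (saturated_s * link_bps
        + (poll_interval_s - saturated_s) * link_bps * quiet_load)
utilization = bits / (poll_interval_s * link_bps)
print(f"NMS-reported utilization: {utilization:.0%}")   # prints 36%
```

The averaging window simply swallows the congested interval; only the drop counters record that it happened.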

To detect congestion, use the network monitoring system to report interface drops. A drop occurs when the interface buffers are full.

Cisco products typically allocate a small number of buffers per interface, often 64. When 64 packets are queued on the interface, the buffer queue is full and any subsequent packets are dropped until at least one packet has been transmitted out of the buffer.

Looking for excessive interface drops allows you to identify interface congestion even when the utilization figures are low. Be careful, though: TCP uses drops as part of its feedback to reduce its sending rate (see the next section for more on this).

Look for interfaces that have thousands of drops per hour. A list of interfaces, sorted in descending order of drop count, is the best way to find congested interfaces. These are the interfaces that should be configured with QoS.
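Here is a minimal sketch of that kind of report, assuming you already collect per-interface drop counters (for example, SNMP ifOutDiscards); the device names and counter values are invented:

```python
# Hypothetical sketch: rank interfaces by output drops per hour from two
# snapshots of drop counters (for example, SNMP ifOutDiscards) taken an
# hour apart. Device names and counter values are invented.

previous = {"core1 Gi1/0/1": 120_400, "core1 Gi1/0/2": 98_200, "dist2 Te1/1": 5_300}
current  = {"core1 Gi1/0/1": 121_150, "core1 Gi1/0/2": 143_800, "dist2 Te1/1": 5_310}

drops_per_hour = {name: current[name] - previous[name] for name in current}

# Interfaces with thousands of drops per hour are the QoS candidates.
for name, drops in sorted(drops_per_hour.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <-- configure QoS here" if drops > 1000 else ""
    print(f"{name:16s} {drops:7d} drops/hour{flag}")
```

In practice the snapshots would come from your NMS or an SNMP poller rather than hard-coded tables, but the ranking logic is the same.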

Increase Interface Buffers
At another customer, we heard this story:

We had drops on a key link, so we added interface buffers to reduce the number of drops.

The number of drops was indeed bad, but increasing the number of buffers on the interface was not the right answer. The problem with this solution was that they added thousands of buffers, and the latency due to buffering during data bursts became a major problem. Real-time applications were discarding packets that arrived late, and TCP was retransmitting packets that it thought had been lost but that were in fact simply waiting in the buffers for transmission.

This was on a 1-Gbps link. The duplicate packets consumed network bandwidth with no benefit. We knew that TCP was retransmitting packets because we found large numbers of duplicate TCP ACKs in our packet traces.

It is actually better to drop excess traffic than to buffer large numbers of packets. The right long-term solution is to increase the link bandwidth; the short-term solution was to reduce the number of buffers and use QoS to prioritize traffic.
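A quick, hypothetical calculation shows why: on a 1-Gbps link, the worst-case queuing delay grows linearly with the number of full-size packets sitting in the buffer.

```python
# Hypothetical arithmetic: worst-case queuing delay added by a deep buffer
# of full-size (1500-byte) packets on a 1-Gbps link.

link_bps = 1_000_000_000
packet_bytes = 1500

for buffers in (64, 2000, 8000):
    delay_ms = buffers * packet_bytes * 8 / link_bps * 1000
    print(f"{buffers:5d} buffers -> up to {delay_ms:5.1f} ms of queuing delay")
```

At 64 buffers the wait is well under a millisecond; at several thousand buffers it reaches tens of milliseconds, long enough for real-time applications to discard the packets as late and for TCP to retransmit them.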

Next page: The Data Center doesn't need QoS?


