
Do You Hear What I Hear?

Even in biblical times, it was important that people get the message right, as illustrated by the night wind asking the little lamb for confirmation in the classic Christmas song "Do You Hear What I Hear?".

Hearing properly, and audio quality in general, are very important in UC implementations, especially when business requirements call for the deployment of softphones: software applications running on multipurpose PCs or tablets that serve as communication endpoints (as opposed to purpose-built hardware phones).

The challenge becomes monitoring and measuring audio quality. In the Christmas song, the night wind simply asks the little lamb a yes or no question. However, for business communications, hearing and hearing well or clearly are often two very different things.

For a long time, telecom personnel have used the MOS, or mean opinion score, to capture a person's subjective opinion of the quality of a voice connection. The scale ranges from a low of 1, indicating a very annoying audio experience, to a high of 5, representing virtually no audible flaws. As you gather ratings from multiple people, you compute the MOS by averaging the individual scores. A typical Voice over IP (VoIP) implementation scores between 3.5 and 4.2.
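The averaging step is simple enough to sketch in a few lines of Python; the ratings below are illustrative, not real survey data:

```python
# Compute a mean opinion score (MOS) from individual subjective ratings.
# Each rating is on the standard 1 (very annoying) to 5 (no flaws) scale.
ratings = [4, 4, 3, 5, 4, 3, 4]  # hypothetical survey responses

mos = sum(ratings) / len(ratings)
print(f"MOS: {mos:.2f}")  # a typical VoIP deployment lands between 3.5 and 4.2
```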

Polling multiple people has shown that different codecs (compression and decompression algorithms) yield slightly better or slightly worse MOS values. For instance, the G.711 codec typically yields a higher MOS than the G.729 codec. On the other hand, G.729 needs only about one-eighth of the audio bandwidth G.711 does (roughly 8 kbps versus 64 kbps of codec payload). As is typical in life, you need to understand and make the right trade-offs.
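It is worth noting that the one-eighth ratio applies to the codec payload only; once IP/UDP/RTP headers are counted, the on-wire saving is smaller. A quick back-of-the-envelope sketch, assuming 20 ms packetization, 40 bytes of header overhead per packet, and no layer-2 overhead:

```python
# Rough per-call, one-way bandwidth for two common codecs.
HEADER_BYTES = 40          # 20 (IP) + 8 (UDP) + 12 (RTP)
PACKETS_PER_SEC = 50       # one packet every 20 ms

def on_wire_bps(codec_bps):
    """On-wire bit rate for a given codec payload rate."""
    payload_bytes = codec_bps // 8 // PACKETS_PER_SEC
    return (payload_bytes + HEADER_BYTES) * 8 * PACKETS_PER_SEC

g711 = on_wire_bps(64_000)  # G.711: 64 kbps payload -> 80 kbps on the wire
g729 = on_wire_bps(8_000)   # G.729:  8 kbps payload -> 24 kbps on the wire
print(g711, g729)
```

So in this scenario G.729 uses roughly 30% of G.711's on-wire bandwidth rather than one-eighth, which is still a substantial saving on constrained WAN links.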

Beyond deriving a numeric audio quality score, I would suggest the following rules, based on several projects in which softphones were deployed to thousands of people, to ensure that your end users remain happy:

1. Set expectations correctly.
In almost all work environments, wireless connections are subject to many factors that can diminish overall audio quality. Even with well-planned and well-configured wireless access points, interference from other devices, including employees' personal devices, microwaves and other office equipment, can temporarily disrupt audio quality. A large concentration of users connecting to wireless in a single spot, for instance a large meeting room, can overwhelm standard access point configurations, leading to audio degradation. Unless you have tested, re-tested and stress-tested your wireless network, you may want to let users know that for the "best experience," a wired connection should be used.

When working from home, a local coffee shop or a hotel, users should understand that the network will affect overall sound quality. Where possible, users should be aware of network quality indicators (for example, the "signal strength bars" in the Lync client) and may want to make a test call before joining an important meeting remotely.

2. Implement QoS where you can.
All VoIP solutions require networks that expedite voice packets on both the WAN and LAN in order to provide great audio quality. Even though a variety of codecs attempt to deal with less-than-ideal network conditions, you should always implement and test (!) QoS (quality of service) and CoS (class of service) on all the network segments you control.
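In practice, expediting voice packets usually means marking media traffic with the DSCP class your network honors, commonly Expedited Forwarding (EF) for voice. As a minimal sketch, a Python application sending its own media could mark its UDP socket like this (the value is standard, but whether the operating system and network actually honor it varies; Windows in particular typically ignores socket-level TOS settings in favor of Group Policy):

```python
import socket

# DSCP "Expedited Forwarding" (EF, value 46) is the class commonly used
# for voice media. The DSCP occupies the top six bits of the IP TOS byte,
# so the byte value to set is 46 << 2 = 184 (0xB8).
DSCP_EF = 46
TOS_EF = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
```

Marking at the endpoint only helps if switches and routers along the path are configured to trust and queue on those markings, which is exactly why testing end to end matters.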

3. Use approved headsets and devices.
Each softphone client is designed to work with specific headsets. Make sure you buy the right headsets for your users. Not only will the audio quality be better, but the overall user experience will improve, as all the "special" buttons on the device (volume, mute, disconnect and so on) will likely work as expected.

If your users plan to purchase devices themselves (e.g., Bluetooth headsets) you should provide a list of devices that will work well (and that you have tested) in your environment. Your support desk should also have access to sample devices so they can better assist end users who are having difficulties.

4. Provide headset-specific training materials.
Don't assume your users know how to properly use a provided headset. For the current project I am leading, headset-related issues remain the largest single category for support calls. Make sure your training materials address issues such as:

a. How do I select which device is used for audio?
b. How do I control call volume or mute a call using the buttons on the headset?
c. Can I plug in a headset halfway through a call?
d. Can I set my laptop speakers to ring, so I don't need to be wearing my headset to hear incoming calls?
e. What is the expected range for Bluetooth headsets? What other devices may cause interference?
f. Can I pair my Bluetooth headset with both my laptop and my mobile phone?

5. Provide a feedback mechanism.
Even when you implement, test and measure an excellent audio quality solution, some users will still occasionally perceive audio quality issues.

These issues may be self-inflicted: the user may not have followed the rules or may not have used an approved headset. In any event, to maintain and support the perception of a strong solution, I would suggest you provide your users with a direct and simple method to report problems. I would then strongly recommend that you implement a process that investigates and follows up on all audio complaints, especially in the early stages of a UC deployment. I have found that actual or perceived audio quality issues, left unchecked, can develop into rumors and innuendo that derail a UC project.

I would like to close on a philosophical note and suggest that regardless of how we measure the quality of audio, listening and working to understand each other will always remain more important than simply hearing.