Turn Your Video Collaboration Analytics Into Actionable Data
It’s the age of big data, and all of our systems are generating it. Collaboration environments can track minutes, calls, participants, and geographies. This data tells us what type of endpoints are being used, how long they are connected, if users join the meeting late or leave early, what network they connect to, and how much packet loss occurred during the call. Interesting stuff if you like data, but how does it really help us?
What we strive for is actionable data. We want reports that point us to a problem we can fix or an opportunity we can act on. Let’s look at how we can use the data we have to find and fix real user problems.
The investment in a collaboration technology deployment is intended to increase the productivity of the system’s users, whether they be company employees, vendors, partners, or customers. Management wants to reduce the friction of collaboration in support of some enterprise goal (increased sales, decreased costs, increased customer involvement or service usage, reduced time-to-market, etc.). Today’s challenge, with most of us working from home, is enabling as many of our workforce as possible to remain productive while keeping them safe.
Success for the collaboration manager gets measured in adoption (how many employees/users are using the system) and usage (how frequently are they using the system and how long are their calls). Of course, these are really a proxy for how well our users are enabled. Longer-term productivity measures (like sales, costs, time-to-market) may give us more direct feedback and should be reviewed, but are hard to measure in the near-term.
Almost every system will tell us how many users are licensed, how many are using the environment at a given time, the total number of calls per day/week/month, and the total minutes in calls. These are good high-level measurements to provide an understanding of adoption. The enterprise should have a goal in mind of the usage expected, based on the type of work users do, so that measurements can be compared to the objective and thus determine success. These measures are valuable during the roll-out phase and the adoption phase to make sure information workers are learning how to use the system and are changing their work habits to take advantage of the collaboration technology.
Adoption and usage statistics should show not just overall usage and adoption, but how it breaks down across geographies and departments. That helps determine which areas of the company need more help in their transition to the technology, either by providing more training or by adapting the solution in some way to accommodate their specific needs (better cameras, speakerphones or headsets, upgraded laptops, collaboration boards for conference rooms, etc.).
Adoption charting also helps determine the expected licensing cost and whether the company is getting the expected value for their money. If a block of users is licensed but not using the solution, they will drag down the expected return on investment (ROI).
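As a sketch of this kind of adoption breakdown, the department records and the 70% adoption target below are hypothetical placeholders; a real report would pull licensed and active user counts from the collaboration platform’s admin API:

```python
# Hypothetical records: (department, licensed_users, active_users).
records = [
    ("Sales",       120, 96),
    ("Engineering", 200, 170),
    ("Finance",      80, 28),
]

def adoption_by_department(records, target=0.70):
    """Return each department's adoption rate and flag groups below target.

    Departments flagged here are candidates for more training or for
    equipment upgrades, and unused licenses drag down the expected ROI.
    """
    report = {}
    for dept, licensed, active in records:
        rate = active / licensed if licensed else 0.0
        report[dept] = {"rate": round(rate, 2), "below_target": rate < target}
    return report

print(adoption_by_department(records))
```

A report like this makes the licensing-cost conversation concrete: the flagged departments are exactly the license blocks not returning value.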
User Experience (UX)
The biggest factor driving ongoing use is user experience (UX) from start to finish. This includes scheduling a call, joining a call, the quality of the in-call experience (audio and video), the ability to see and share content, and the availability of call-related materials after the call (slides, notes, automatic transcription or recording). What would really be valuable is a score of the UX that could be broken down by various categories of the user’s environment. Being able to spot cohorts of users that are having a poor experience and why they are having a poor experience will allow those issues to be addressed.
Analytics for Quality of Experience (QoE)
Now let’s take a look at the data points we can collect that impact the UX. Not all components of the user experience are available through the network, but many are. If we artfully connect the available data points, we can get some pretty good guesses on the UX.
Real-time traffic (voice and video streams) requires low loss, low jitter, and reasonable latency to provide a quality image or sound. Of these three, packet loss is typically the biggest problem. Jitter isn’t usually a problem on high-speed networks, and we have very little control over latency.
Packet loss is best measured by the video or audio application. Each sending device puts a sequence number in each packet sent, and the receiving end uses those sequence numbers to align them in the right order and to determine if some never arrived. Because the application experiences the entire network path, including the very last connection on each end (which may be USB, Wi-Fi, or Bluetooth), it provides the real truth about the quality of the network path. These statistics are typically available in the call data records (CDRs) for a collaboration solution.
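The sequence-number mechanism described above can be sketched in a few lines. This is a simplified receiver-side estimate, not any particular vendor’s implementation; real RTP receivers must also handle 16-bit sequence wraparound and reordering windows:

```python
def packet_loss_pct(received_seqs):
    """Estimate packet loss from the sequence numbers seen by the receiver.

    The sender numbers each packet; gaps in the received sequence reveal
    packets that never arrived. Assumes no sequence wraparound within
    the sample window (a real implementation must handle rollover).
    """
    if not received_seqs:
        return 0.0
    seqs = sorted(set(received_seqs))
    expected = seqs[-1] - seqs[0] + 1  # span implied by first/last sequence
    lost = expected - len(seqs)        # gaps = packets that never arrived
    return 100.0 * lost / expected

# Packets 1000-1009 were sent; 1003 and 1007 never arrived.
print(packet_loss_pct([1000, 1001, 1002, 1004, 1005, 1006, 1008, 1009]))  # → 20.0
```

Because the receiver computes this over the entire path, including the last Wi-Fi or Bluetooth hop, it reflects what the user actually experienced, which is why CDRs are the right place to harvest it.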
Determining the component of the network path that’s failing isn’t an easy task without additional tools. The enterprise network team may have some tools in place that can help determine the quality of LAN or WAN paths within the enterprise. Few have any that measure the quality of an Internet connection. Path-based tools (like AppNeta, Nectar, Prognosis or Telchemy) can provide a hop-by-hop result for packet loss showing where along the path packets are being lost.
In lieu of the hop-by-hop loss information, it’s valuable to know as much as possible about where and how users connect to the network. Are they using a Wi-Fi connection? Are they at home, at a coffee shop, or in a hotel? Perhaps they’re connecting via the cellular network. Who is the Internet service provider (ISP) in use for a home user? What was the bandwidth used during the call? What is their geographic location? How far is the home user from their Wi-Fi router? And how did they connect to the collaboration service infrastructure (which infrastructure location and/or which Internet access point on the enterprise network)? Each of these data points provides a sorting option that can reveal a specific problem that may apply to a whole group of users.
Another useful group of data identifies the equipment each user employs. Is the operating system (OS) Mac or Windows? Is it an older system with limited CPU power or memory? What is the OS revision? Does the user have a separate USB camera, or are they using the embedded solution in their laptop? Do they have a speakerphone or headset? What brand and model number?
The quality of the laptop can directly affect the audio or video creation and reproduction, especially if there is a heavy load of other applications simultaneously running. The quality of the camera or audio gear can likewise affect sound quality, ambient noise pickup, and echo cancellation. Some languages require higher quality reproduction to provide a good interactive experience.
Aggregate Your Data
So how can we use this treasure-trove of data and convert it to actionable tickets? We need something like the mean opinion score (MOS) used in VoIP, but it doesn’t exist for UX. To start, choose critical metrics for your environment. My suggestions are:
- End to end packet loss is a key indicator and should figure strongly in your combined score.
- Quality of the user’s endpoint equipment—for instance, a Zoom client will provide the CPU utilization of the end user’s laptop during a call, which could be an important indicator of the load on that device.
- Quality of the audio and visual (A/V) devices (camera, headset, speakerphone, built-in).
Next, scale each metric, give it a weight, and combine them. Packet loss, for instance, is perfect at zero and poor at five percent. We could use a one to 10 scale where 10 is zero percent loss, and one is five percent loss. Do a similar translation with the other fundamental factors. Now weight each one based on how strongly you believe it impacts the UX. You can fine-tune these mappings and weights over time to better match direct user feedback, but use your intuition to get the reporting started.
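The scale-weight-combine step above can be sketched directly. The CPU mapping, the A/V gear rating, and the weights here are illustrative assumptions to be tuned against user feedback, not established values:

```python
def loss_score(loss_pct):
    """Map packet loss onto 1-10: 0% loss -> 10, 5% (or worse) -> 1."""
    clamped = max(0.0, min(loss_pct, 5.0))
    return 10.0 - clamped * (9.0 / 5.0)

def cpu_score(cpu_pct):
    """Hypothetical mapping for endpoint load: 0% CPU -> 10, 100% -> 1."""
    clamped = max(0.0, min(cpu_pct, 100.0))
    return 10.0 - clamped * (9.0 / 100.0)

# Illustrative weights reflecting how strongly each factor is believed
# to impact the UX; tune these against direct user feedback over time.
WEIGHTS = {"loss": 0.5, "cpu": 0.3, "av_gear": 0.2}

def ux_score(loss_pct, cpu_pct, av_gear_score):
    """Combined 1-10 score; av_gear_score is a 1-10 rating kept per device model."""
    return (WEIGHTS["loss"] * loss_score(loss_pct)
            + WEIGHTS["cpu"] * cpu_score(cpu_pct)
            + WEIGHTS["av_gear"] * av_gear_score)

# A call with 1% loss, 40% CPU load, and a decent headset (rated 8/10).
print(round(ux_score(1.0, 40.0, 8.0), 2))  # → 7.62
```

Keeping each mapping in its own function makes the later fine-tuning painless: adjust one curve or one weight without touching the rest of the score.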
Create Actionable Data
Look for correlations between known user complaints and indicators in your data set. Do users with direct Ethernet connections report fewer issues than those with Wi-Fi? Do users in certain geographies report more complications than those in other locations? Or are folks with older laptops, using earbuds and built-in microphones, having a poor experience? Remember, sometimes you need to look at the other end of the connection: the user with no headset may think the call is fine, while the folks trying to hear what they are saying through the PC’s built-in microphone are suffering.
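A minimal sketch of this cohort comparison, using made-up call records joined with help-desk complaint flags (the data and field names are assumptions, not a real schema):

```python
from collections import defaultdict

# Hypothetical per-call records joined with a complaint flag from the help desk.
calls = [
    {"connection": "wifi",     "complaint": True},
    {"connection": "wifi",     "complaint": False},
    {"connection": "wifi",     "complaint": True},
    {"connection": "ethernet", "complaint": False},
    {"connection": "ethernet", "complaint": False},
    {"connection": "ethernet", "complaint": True},
]

def complaint_rate_by(calls, key):
    """Complaint rate per cohort (connection type, geography, device model...).

    A large gap between cohorts points at a structural issue affecting
    that whole group, not an individual user problem.
    """
    totals, complaints = defaultdict(int), defaultdict(int)
    for call in calls:
        totals[call[key]] += 1
        complaints[call[key]] += call["complaint"]
    return {k: round(complaints[k] / totals[k], 2) for k in totals}

print(complaint_rate_by(calls, "connection"))
```

The same function works for any sorting dimension captured earlier: pass `"geography"`, `"os"`, or `"headset_model"` as the key and compare the resulting rates.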
The goal here is to find structural issues that are affecting groups of users rather than trying to address an individual user complaint. Perhaps the data indicates that users should be issued new headsets or given clear directions on how to connect in a home office to get the best low-loss path to the Internet. They should also receive training on how to set up the remote office for a clear picture and better acoustics. You can then address these broader issues through programs directed at resolving the identified challenges and enhance the overall QoE for users across the board.
We are in a data-rich world. Correlating the data points available to us is how we find actionable insights into how users experience the technology, and those insights lead us to changes that can improve it.