
Using AI to Win the Video Bandwidth Battle


Image: video call with some colleagues in the office, others remote (Blue Planet Studio)
Before desktop video became table stakes in enterprise collaboration (i.e., before the pandemic), video conferencing vendors often sought to differentiate their products based on technical considerations that appealed much more to IT buyers than end users. Given video’s nature as a high-bandwidth, delay-sensitive application, much of the focus was on bandwidth consumption, and competition in this area tended to manifest in the form of codec wars: which codec was more bandwidth-efficient, and whether proprietary codecs’ potentially better performance outweighed their lack of ubiquity.
The codec wars had faded even before the pandemic, but what drove them — bandwidth constraints — is now returning in the era of ubiquitous work from home (WFH). Off-hours scheduling — when kids, spouses, or roommates aren’t online doing their own bandwidth-intensive activities — has become a standard piece of advice to anyone planning a critical video call.
In an office setting, the solution would be simple: Throw bandwidth at the problem. That doesn’t work for a distributed WFH workforce. Individuals may not be able to afford to upgrade their home bandwidth (or may not have the option to), and upgrading hundreds or thousands of individual residential links may not be practical for the enterprise, either.
Luckily, there’s a solution on the horizon, and it doesn’t even involve a new round of codec wars. In a No Jitter post last week, Zeus Kerravala, of ZK Research, wrote about Maxine, a new set of cloud-based, AI-driven services from Nvidia, the leading manufacturer of graphics processing units, targeting the enterprise collaboration market. As Zeus describes it, Maxine’s AI uses many of the techniques that video codecs employ to conserve bandwidth. And its “cloud-native architecture enables massive scale for video meetings, many of which are now reaching hundreds or even thousands of concurrent people,” Zeus wrote.
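To see why an AI-driven approach can beat a conventional codec on bandwidth, consider one technique Nvidia has demonstrated publicly: instead of streaming compressed pixels, the sender transmits a reference frame once and then sends only facial keypoints, which a generative model on the receiving end uses to reconstruct the face. A rough back-of-the-envelope sketch makes the savings concrete — all of the specific figures below (per-stream bitrate, keypoint count, bytes per keypoint) are illustrative assumptions, not Nvidia’s published numbers:

```python
# Back-of-the-envelope comparison: conventional codec stream vs. an
# AI keypoint-based stream. All numbers are illustrative assumptions.

FPS = 30  # frames per second

def codec_kbps(bitrate_kbps=1200):
    """Assumed bitrate for a typical 720p video-call stream."""
    return bitrate_kbps

def keypoint_kbps(num_keypoints=130, bytes_per_keypoint=4, fps=FPS):
    """AI approach: send only keypoint coordinates each frame.
    Assumes ~130 facial keypoints at 4 bytes each (x, y as 16-bit values)."""
    bytes_per_frame = num_keypoints * bytes_per_keypoint
    return bytes_per_frame * 8 * fps / 1000  # kilobits per second

codec = codec_kbps()
keypoints = keypoint_kbps()
print(f"codec stream:    {codec:.0f} kbps")     # 1200 kbps
print(f"keypoint stream: {keypoints:.1f} kbps") # 124.8 kbps
print(f"reduction:       ~{codec / keypoints:.0f}x")
```

Even with these rough assumptions, the keypoint stream needs roughly a tenth of the codec’s bandwidth — the kind of order-of-magnitude gain no incremental codec improvement could deliver.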
This isn’t just about WFH, either. It’s widely assumed that when workers return to the office, they’ll be interacting with a more dispersed set of colleagues and partners, which means they’ll continue using video as their primary real-time collaboration medium. That means there will be a lot more video traffic running on corporate networks. Whether it’s the kind of single, massive meeting Zeus described, or many concurrent smaller video conversations, the result will be the same: more video traffic fighting for space on the network.
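The scale of that aggregate traffic is easy to underestimate. A quick sketch (using an assumed, typical per-stream bitrate) shows how concurrent video calls add up at the corporate network edge:

```python
# Rough aggregate-load estimate for concurrent video calls on a corporate
# network. The per-stream bitrate is an illustrative assumption.

def aggregate_mbps(concurrent_calls, per_stream_kbps=1200, streams_per_call=2):
    """Each call both sends and receives one stream through the network edge."""
    return concurrent_calls * streams_per_call * per_stream_kbps / 1000

for calls in (50, 500, 2000):
    print(f"{calls:>5} concurrent calls -> {aggregate_mbps(calls):>7.1f} Mbps")
```

At a few hundred simultaneous calls, video alone can consume over a gigabit of capacity — which is why per-stream efficiency, not just raw pipe size, determines how well these meetings hold up.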
AI is poised to transform video meetings in many ways, some of which Zeus recounts when he describes other Maxine applications. It can make “camera work” better, produce real-time transcriptions and, eventually, even translations. But none of those features are much use if the connection itself is bad because of inadequate bandwidth. In fact, those kinds of features will consume bandwidth themselves, making the problem worse.
So prioritizing bandwidth efficiency is the first step in delivering better video meetings. Video vendors will be able to use Maxine to provide this better performance in their products, without resorting to another generation of codec wars.