How Gigabit Networks Affect Video Conferencing
Better codecs and bigger pipes will mean higher-quality video, and higher bandwidth could allow for mesh connectivity, making MCUs less important.
H.265 and VP9--these new codecs promise to cut video bandwidth use by roughly 50%. A noble cause, considering that video accounts for an ever-growing share of the Internet's traffic. It also makes life easier on ADSL, where upload speeds are a fraction of download speeds.
But what happens to this whole notion when our home connections reach gigabit speeds?
As with any theoretical exercise (especially those related to economics), we will start out by fixing some of the variables so they won't interfere with my meddling. Let's model it this way:
1. The backbone Internet can absorb as much data as we feed it; there are no bottlenecks in the network.
2. Our home connection is symmetric, providing 1 Gbps in both directions.
3. The video is HD quality; for our purposes, that means 2 Mbps per stream (with or without the added benefit of H.265/VP9).
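To see how much headroom these assumptions leave us, a quick back-of-the-envelope calculation (numbers are the illustrative ones from the model above, nothing more):

```python
# Model assumptions: symmetric 1 Gbps link, 2 Mbps per HD video stream.
LINK_BPS = 1_000_000_000
STREAM_BPS = 2_000_000

# How many 2 Mbps streams fit in each direction of the pipe?
streams_per_direction = LINK_BPS // STREAM_BPS
print(streams_per_direction)  # 500
```

Five hundred streams in each direction--far more than any sane call will ever need, which is exactly what makes the thought experiment interesting.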
What would we do differently in that case?
The straightforward solution is to add robustness to our transmission: if pushing more data is fine, I can add redundancy to my data prior to sending, so that lost packets will not affect the end result--I will be able to reconstruct them with high probability. How much redundancy do we add? Who cares? With 1 Gbps, the sky is the limit (it isn't, but bear with me).
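As a minimal sketch of what "reconstructing lost packets from redundancy" means, here is the simplest forward-error-correction scheme there is: an XOR parity packet over a group of media packets. This is illustrative only--real codecs and transports use stronger schemes (Reed-Solomon, RTP-level FEC), and the function names here are my own:

```python
# Sketch: XOR parity over a group of equal-length packets.
# Sending one extra parity packet lets the receiver repair any
# single lost packet in the group.

def xor_parity(packets: list[bytes]) -> bytes:
    """Compute a parity packet by XOR-ing all packets together."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received: list, parity: bytes) -> list:
    """Reconstruct at most one missing packet (marked as None)."""
    missing = [i for i, p in enumerate(received) if p is None]
    if not missing:
        return received
    assert len(missing) == 1, "XOR parity can repair only one loss"
    repaired = bytearray(parity)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                repaired[i] ^= b
    out = list(received)
    out[missing[0]] = bytes(repaired)
    return out

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(group)
# Packet 2 lost in transit--reconstruct it from the parity packet.
assert recover([b"pkt1", None, b"pkt3"], parity) == group
```

The trade-off is pure overhead-for-resilience: one extra packet per group buys you immunity to one loss. With a fat enough pipe, you can afford much larger overhead ratios.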
Up next come multipoint calls. Today these are handled in almost all cases by a mediation server--an MCU or video router that collects all sources and then transcodes (or simply switches) the video, sending the results to the endpoints. Do we really need such a function when we can route our data directly to all endpoints? Our connection is wide enough, so there is no bottleneck on either the sender or receiver end. This would change the whole concept of what we need or don't need on the infrastructure side of the interaction.
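The bandwidth difference between the two topologies is easy to put in numbers. Under the 2 Mbps model above, here is a rough comparison of per-endpoint uplink cost (a sketch with my own function names, not a model of any particular product):

```python
# Per-endpoint uplink for an N-way call, 2 Mbps per stream.
# Mesh: each endpoint sends its stream directly to every other peer.
# MCU/star: each endpoint sends one stream up to the server.

STREAM_MBPS = 2

def mesh_uplink_mbps(n: int) -> int:
    return (n - 1) * STREAM_MBPS

def mcu_uplink_mbps(n: int) -> int:
    return STREAM_MBPS  # one stream to the server, regardless of n

for n in (4, 10, 50):
    print(f"{n}-way call: mesh {mesh_uplink_mbps(n)} Mbps, "
          f"MCU {mcu_uplink_mbps(n)} Mbps uplink")
```

On an asymmetric ADSL uplink, mesh dies almost immediately; on a symmetric 1 Gbps link, even a 50-way mesh call uses under 100 Mbps of uplink--a tenth of the pipe.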
Back to reality
The above isn't the case, and never will be. But we need to remember a few things:
1. Moore's Law suggests that our computing power will double roughly every 18 months. This means that if we can't handle a workload locally today, we will be able to in the near future--near enough that it's worth planning for now
2. Bandwidth is getting higher with time, effectively giving us wider pipes to use
3. Codec technology is getting better, providing us with more quality in less bitrate
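The first point compounds quickly. A quick projection of the headroom an 18-month doubling period buys (illustrative arithmetic only, not a forecast):

```python
# Compute-power multiplier after a given number of years,
# assuming a doubling every 18 months (Moore's Law, as cited above).

def compute_multiplier(years: float, doubling_months: float = 18) -> float:
    return 2 ** (years * 12 / doubling_months)

print(compute_multiplier(3))  # 4.0  -- two doublings in 3 years
print(compute_multiplier(6))  # 16.0 -- four doublings in 6 years
```

A workload that is 4x too heavy for today's endpoint is, by this assumption, roughly three years away from being feasible locally.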
All of this suggests that the future of real-time video communications lies with two trends:
1. Error resilience in codecs will become commonplace
2. Multipoint video calls will also include mesh architectures, not only the centralized ones we have today--and probably hybrids of the two