Speeding Up the Internet

New ideas for transport aren't limited to protocols. If we could repeal the laws of physics, we'd eliminate the speed of light as a limit on electronic communications. Unfortunately, we can't.

However, by analyzing network traffic, some researchers have postulated that simply changing the type of physical link used for a subset of all Internet links can improve the performance of certain types of Internet traffic. In a paper titled "Towards a Speed of Light Internet," a team of researchers suggested that using terrestrial microwave links for specific types of traffic could significantly improve Internet performance.

Their research showed that fetching the index pages of popular websites took a median time of 35 times the speed-of-light round-trip time. That's a lot of time and a lot of data. Their microwave-link proposal could reduce that time more than three-fold, which would still leave plenty of room for optimization. The authors make it clear that the remaining time has to be attacked with smarter protocols.
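
To make the scale of that gap concrete, here is a rough back-of-the-envelope calculation in Python. The distance, fiber slowdown, and path-stretch figures below are illustrative assumptions, not numbers from the paper:

```python
# Rough illustration of how far typical page fetches are from the physical limit.
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_SLOWDOWN = 1.5      # assumed: light in fiber travels at roughly c/1.5
PATH_STRETCH = 2.0        # assumed: real routes are about 2x the geodesic distance

distance_km = 4_000       # hypothetical client-to-server geodesic distance

c_rtt_ms = 2 * distance_km / C_VACUUM_KM_S * 1000
fiber_rtt_ms = c_rtt_ms * FIBER_SLOWDOWN * PATH_STRETCH

print(f"speed-of-light RTT:        {c_rtt_ms:.1f} ms")      # ~26.7 ms
print(f"typical fiber-path RTT:    {fiber_rtt_ms:.1f} ms")  # ~80.1 ms
print(f"median fetch (35 x c-RTT): {35 * c_rtt_ms:.0f} ms") # ~934 ms
```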

For some applications, the side effects of TCP and UDP functionality are not desirable. For example, TCP delivers data to the application in sequence, meaning that if a packet is lost, all subsequent packets must be held in kernel buffers until the lost packet is retransmitted and finally received. This is called "Head of Line Blocking" because the missing packet at the head of the queue blocks access to the data behind it. That's not so bad for a single data stream, but if an application multiplexes multiple data streams over one connection to improve performance, a single lost packet stalls all of them.
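
The receiver-side bookkeeping behind that stall can be sketched in a few lines of Python; this is a toy model, not real kernel code:

```python
# Toy model: received segments are buffered until the next expected one arrives.
def deliverable(next_expected, received):
    """Return the segments that can be handed to the application, in order."""
    out = []
    while next_expected in received:
        out.append(received.pop(next_expected))
        next_expected += 1
    return next_expected, out

buffer = {2: "seg2", 3: "seg3", 4: "seg4"}      # segment 1 was lost in transit
expected, ready = deliverable(1, buffer)
print(ready)   # [] : nothing is deliverable until segment 1 shows up
buffer[1] = "seg1"                               # the retransmission finally lands
expected, ready = deliverable(expected, buffer)
print(ready)   # ['seg1', 'seg2', 'seg3', 'seg4'] : everything at once
```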

Another problem with TCP is that it only provides a byte-stream model. An application that needs to send individual requests or data records must add its own internal markers to delineate the requests or records. In addition, application developers must know when to call the TCP API to push any existing data into the network (this sets the TCP PSH/push bit). Wouldn't it be nice if the network APIs provided a standard mechanism to send and receive complete message units?
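
As a sketch of what that application-level framing looks like, here is a minimal length-prefix scheme in Python; the function names are illustrative, not part of any standard API:

```python
import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    """Prefix each message with its length as a 4-byte network-order integer."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping because TCP may return fewer per call."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    """Read one complete, length-prefixed message from the byte stream."""
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
```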

TCP is tied to a connection-oriented model that identifies a connection as a four-tuple of source-IP, source-port, destination-IP, and destination-port. If the interface on one of the connection endpoints goes down for any reason, the connection dies and must be re-established even if other network interfaces are up and operational. We need something that seamlessly transitions to alternate interfaces where possible.
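
The four-tuple is easy to see from any connected socket. In this illustrative snippet (example.com stands in for a real server), a change to any of the four printed values means the existing connection is gone and a new one must be built:

```python
import socket

# Open a TCP connection and print the four values that identify it.
sock = socket.create_connection(("example.com", 80))
src_ip, src_port = sock.getsockname()
dst_ip, dst_port = sock.getpeername()
print(f"four-tuple: ({src_ip}, {src_port}, {dst_ip}, {dst_port})")
sock.close()
```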

UDP has its problems, too. It isn't reliable, so reliability must be added for applications that need it. Also, there is no inherent message identifier, meaning that if more data must be sent than fits into a single UDP datagram, the application must handle the fragmentation and reassembly itself.
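
A minimal sketch of the sending side of that do-it-yourself fragmentation might look like the following; the chunk size, header layout, and function name are assumptions for illustration, and reliability and reordering are left entirely to the application:

```python
import socket

CHUNK = 1200   # assumed chunk size, chosen to stay below a typical path MTU

def send_record(sock: socket.socket, addr, record: bytes) -> None:
    """Split one application record across several datagrams.
    Each datagram carries (sequence number, total chunks) so the receiver
    can reassemble the record; UDP itself offers no help with any of this."""
    chunks = [record[i:i + CHUNK] for i in range(0, len(record), CHUNK)] or [b""]
    total = len(chunks)
    for seq, chunk in enumerate(chunks):
        header = seq.to_bytes(2, "big") + total.to_bytes(2, "big")
        sock.sendto(header + chunk, addr)
```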

There have been a number of new protocols implemented and proposed in recent years that can be faster and are worth investigating:

The goal of SCTP is to provide better mechanisms and control for communicating between networked systems. It is documented in RFC 4960, which includes a nice description of TCP's problems as well as how SCTP addresses them. While SCTP was originally designed to transport PSTN signaling across a network, it was also selected as the data transport protocol in WebRTC.

SCTP is a connection-oriented protocol, but with more functionality that lets application developers customize its operation. Multiple sessions can be multiplexed over one "connection" without Head of Line Blocking. An association can bind multiple network interfaces as a set of endpoint addresses, so multi-homed systems can maintain resilient network connections. Messages are sent and received as complete units by the applications, so no additional mechanism is needed to delineate application transactions. The software can even control what happens in the face of packet latency and loss.
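
On platforms with SCTP support (for example, Linux with the SCTP kernel module loaded), even Python's standard socket API can open a basic one-to-one SCTP connection, as sketched below; per-stream control and other advanced features generally require a third-party library such as pysctp, and the server address here is just a placeholder:

```python
import socket

# socket.IPPROTO_SCTP is only defined where the platform provides SCTP.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(("198.51.100.10", 5000))   # placeholder SCTP server address
sock.sendall(b"hello over SCTP")
sock.close()
```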

SPDY is not an acronym; it is pronounced "Speedy" to convey the idea that it is a fast protocol. Created primarily by Google, it is functionally an HTTP tunnel, encrypting and compressing HTTP headers and data on the wire. SPDY is now being deprecated in favor of HTTP/2, which is based on an early version of SPDY and provides equivalent, standards-based functionality. The compression reduces the volume of repetitive data that transits between Web clients and servers, helping reduce total traffic volume on the Internet.
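
From the application's point of view, the header compression and stream multiplexing happen below the API. As an illustration, with the third-party httpx library (installed with its HTTP/2 extra), a client only has to opt in:

```python
import httpx

# Requires: pip install "httpx[http2]"
with httpx.Client(http2=True) as client:
    response = client.get("https://www.google.com/")
    print(response.http_version)   # "HTTP/2" when the server negotiates it
```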

What SPDY doesn't do is tackle the fundamental problem of reducing round-trip times.

Also developed by Google, QUIC was created to eliminate sources of multiple round-trip times between client and server. (See this Chromium Blog post for more information.) As its name indicates, it operates on top of UDP, so there is no three-way handshake as there is with TCP. Using UDP as its transport has two interesting benefits:

If two systems have been talking with one another, no three-way handshake is needed; simply start transferring data. Zero round-trip time! Under good conditions, the result is a slight improvement over TCP; under poor network conditions, the new mechanism is vastly better. In the Chromium blog post, Google reported 30% less time spent waiting for YouTube videos to buffer when viewed over QUIC.
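
For experimenting with QUIC outside the browser, the third-party aioquic library implements the protocol in Python. The sketch below is a minimal client handshake; the host name and ALPN value are placeholders rather than a real endpoint:

```python
import asyncio
from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def main() -> None:
    # Negotiate a QUIC connection over UDP port 443, then send a PING frame.
    config = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
    async with connect("quic.example.net", 443, configuration=config) as client:
        await client.ping()

asyncio.run(main())
```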

QUIC has now been incorporated into the Chrome browser, and about half of the traffic from Chrome browsers to Google servers uses this protocol. This shows that Google is getting real performance data as well as demonstrating how easy QUIC is to deploy and use.

What does all this have to do with unified communications? The YouTube example in the QUIC section should provide a hint. Good-quality voice and video can reach more people, and great voice and video can be provided to those who already have excellent network connectivity. The impact of increased video use can be mitigated in part by these efforts to make the network more efficient.

Of course, the benefits aren't without some concern. These new protocols are unfamiliar to network staff and perhaps to their tools as well. SPDY and HTTP/2 encrypt and compress data on the wire, so tools like Wireshark will have more difficulty examining the payloads. [Note: Wireshark already has a QUIC protocol decoder.] Other network tools, like application performance management systems, will have to add new protocol modules.

Network engineers will have to become familiar with how the new protocols function so that they can do the necessary troubleshooting when things don't go as planned. It looks like some new education is in the near future as these new protocols roll out. Combine these changes with the changes that NFV and SDN are bringing and the old Chinese proverb rings true: May you live in interesting times.