The Complexities of Web Performance

Users of the Internet have very little patience, especially when they are looking for information or interacting with a website to make a purchase or reservation. The speed and responsiveness of a website are the true litmus test of whether users will continue to use it or simply look for an alternative.

Websites and their supporting infrastructure are complex, with many moving parts. Volumes have been written about every aspect of web infrastructure, from code optimization to network design. The purpose of this article is to focus on one element of web performance: the network-based metrics that contribute to the amount of time spent before a single byte ever arrives at the user's web browser. It is important to understand these components, especially for an enterprise that develops and hosts a public-facing website.

In total, five steps occur every time a user makes a request to a website for the first time. All of these steps are required, with the possible exception of the SSL handshake (which is skipped if the site does not use HTTPS), so it is important to understand them. In addition, I am going to share some optimization techniques and best practices to help minimize the time spent on each of these steps.

The first step is the amount of time required to perform a DNS lookup. This is the starting point of any request made on the Internet, regardless of the requested service: the process by which a host name (www.nojitter.com) is converted to a numeric address (192.155.48.108). The key point is that not all DNS services are equal. Using an overloaded DNS server can have a negative impact on the end-user experience. It is estimated that the average site today references approximately 78 external links, so any added delay can compound very quickly.
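To get a feel for how long a lookup takes, you can time the resolver call directly. The sketch below uses only Python's standard library and the example host name above; keep in mind that the operating system's own DNS cache may make repeat measurements appear faster than a cold lookup.

import socket
import time

hostname = "www.nojitter.com"

start = time.monotonic()
# getaddrinfo triggers the same resolver lookup a browser would perform
results = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
elapsed_ms = (time.monotonic() - start) * 1000

for family, _, _, _, sockaddr in results:
    print("Resolved address:", sockaddr[0])
print(f"DNS lookup took {elapsed_ms:.1f} ms")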

One way to mitigate slow DNS performance is to use a third-party DNS provider if possible. Another is to use a longer DNS Time to Live (TTL) value. TTL is a setting on each DNS record that specifies how long a resolver may cache the answer before it expires and a new lookup must be performed.
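If you want to see what TTL your records are actually being served with, a resolver library makes this easy to check. The sketch below assumes the third-party dnspython package (installed with pip install dnspython); on versions older than 2.0 the call is dns.resolver.query() rather than resolve().

import dns.resolver

answer = dns.resolver.resolve("www.nojitter.com", "A")
for record in answer:
    print("A record:", record.address)
# The TTL is the number of seconds resolvers may cache this answer
print("TTL:", answer.rrset.ttl, "seconds")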

The next step is the amount of time required to establish a connection between the user's web browser and the web server. Unfortunately, a myriad of factors can affect this, many of which are beyond anyone's control because network performance varies. However, an interesting point to be aware of is what happens when the initial request (the SYN packet) gets lost or dropped. Depending on the operating system that initiated the request, the retransmission interval can vary from one second (Mac/Linux) to three seconds (Windows).
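The connection setup time itself is easy to measure. In this minimal sketch, socket.create_connection returns once the TCP three-way handshake has completed, so the elapsed time approximates connection establishment (plus a DNS lookup if the name is not already cached).

import socket
import time

host, port = "www.nojitter.com", 443

start = time.monotonic()
# create_connection resolves the name and completes the TCP handshake
sock = socket.create_connection((host, port), timeout=5)
elapsed_ms = (time.monotonic() - start) * 1000
sock.close()

print(f"TCP connection to {host}:{port} established in {elapsed_ms:.1f} ms")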

After a connection is established, it is common for a user's session to be redirected to another web server. The problem is that if the initial request is redirected to another domain, the DNS lookup and TCP connection process must happen all over again, adding more time. If possible, redirects should be minimized, or landing-page redirects should be made cacheable.
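One way to spot a cross-domain redirect is to issue a single request without following it and inspect the Location header. The sketch below uses Python's http.client, which does not follow redirects on its own; the host name here is only a placeholder for illustration.

import http.client
from urllib.parse import urlparse

host = "example.com"  # placeholder; substitute the site being tested
conn = http.client.HTTPSConnection(host, timeout=5)
conn.request("GET", "/")
response = conn.getresponse()

if 300 <= response.status < 400:
    location = response.getheader("Location", "")
    target_host = urlparse(location).netloc
    if target_host and target_host != host:
        # A cross-domain redirect forces a new DNS lookup and TCP/SSL setup
        print(f"Cross-domain redirect from {host} to {target_host}")
    else:
        print("Same-host redirect to", location)
    # A cacheable redirect lets the browser skip this hop on repeat visits
    print("Cache-Control:", response.getheader("Cache-Control"))
else:
    print("No redirect; status", response.status)
conn.close()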

Websites that require a secure connection (e.g., e-commerce sites) must go through the process of an SSL handshake, which takes additional time. To minimize the time needed to complete the handshake, the web server should send a complete SSL certificate chain. When it does not, additional round trips must occur to complete the process, which incurs further delay.
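The handshake cost can be measured separately from the TCP connection by timing each phase. This is a minimal sketch using Python's standard library; note that if the server presents an incomplete certificate chain, verification here may simply fail, whereas a browser would typically spend extra round trips retrieving the missing intermediate certificates.

import socket
import ssl
import time

host, port = "www.nojitter.com", 443
context = ssl.create_default_context()

tcp_start = time.monotonic()
raw_sock = socket.create_connection((host, port), timeout=5)
tcp_ms = (time.monotonic() - tcp_start) * 1000

tls_start = time.monotonic()
# wrap_socket performs the SSL/TLS handshake before returning
tls_sock = context.wrap_socket(raw_sock, server_hostname=host)
tls_ms = (time.monotonic() - tls_start) * 1000

print(f"TCP connect: {tcp_ms:.1f} ms, SSL handshake: {tls_ms:.1f} ms")
print("Negotiated protocol:", tls_sock.version())
tls_sock.close()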

The last step in the process is the amount of time it takes for the first byte to be delivered to the user's web browser. Two aspects can slow this process down. The first is the use of large cookies. If a cookie is large enough, the request may have to be split across multiple packets, which makes it more susceptible to packet loss or delay. The best way to mitigate this issue is to keep cookies small.
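Time to first byte can be approximated with the standard library as well. In the sketch below, getresponse() returns once the status line and response headers have arrived, so the measured time bundles together DNS, TCP, SSL, request transmission, and server processing.

import http.client
import time

host = "www.nojitter.com"

start = time.monotonic()
conn = http.client.HTTPSConnection(host, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
ttfb_ms = (time.monotonic() - start) * 1000

print("Status:", response.status)
print(f"Approximate time to first byte: {ttfb_ms:.1f} ms")
conn.close()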

The second aspect is server processing: how the web server handles requests and generates the HTML documents it returns. There are a number of things a web server can do to maximize efficiency and minimize processing time, but this is a complex topic that is beyond the scope of this article.

Although each of these steps seems small, added together they can account for a significant amount of time. Overlooking them, or assuming they have already been optimized, can put the success of your company's website at risk.