Multiple T1 or E1 links can be combined to provide a logical link with higher bandwidth. But how those links are combined can affect the behavior of the network and the quality of the voice or video streams running across them. There are a few choices for how to do this, and the answer won't be the same for every enterprise.

I am working with a customer who has a large main office and many small offices distributed around the country. They are moving from a Frame Relay WAN to an MPLS-VPN based WAN and want to move their video conferencing from ISDN onto this new IP-based infrastructure. Each remote office is currently served by a single T1 line, with a second T1 line carrying the voice and the ISDN for the video. The current plan is to convert the second T1 to IP traffic and combine it with the first link to provide about 3 Mbps of bandwidth to each office.
The two lines can be combined to look like a single link using either Cisco Express Forwarding (CEF) or Multilink Point-to-point Protocol (MLPPP). CEF gives the user a number of ways to balance the traffic load across the two links.
Method #1 is to assign packets to a link on a round-robin basis: the first packet goes down link-A, the second down link-B, and so on. The router at the receiving end merges the packets back into a single stream and forwards them on. Unfortunately, CEF does not guarantee that the packets will remain in the order they were sent. If one of the two links has a longer delay than the other, it is quite possible (and common) for packets to be reordered.
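The reordering effect is easy to see in a small model. This sketch assigns packets round-robin across two links with different one-way delays and shows the order in which they reach the receiver; the delay values and packet spacing are illustrative assumptions, not measurements of real T1 circuits.

```python
# Per-packet (round-robin) balancing across two links with unequal delay.
# Illustrative model only: delays in ms are assumed, packets sent 1 ms apart.

LINK_DELAY_MS = [10, 14]  # link-A is faster than link-B

def round_robin_arrivals(num_packets):
    """Assign packets round-robin to the two links; return arrival order."""
    arrivals = []
    for seq in range(num_packets):
        link = seq % 2                           # alternate link-A / link-B
        arrive = seq + LINK_DELAY_MS[link]       # send time + link delay
        arrivals.append((arrive, seq))
    arrivals.sort()                              # order seen by the receiver
    return [seq for _, seq in arrivals]

print(round_robin_arrivals(8))  # prints [0, 2, 4, 1, 6, 3, 5, 7]
```

With just a 4 ms delay difference, every other packet arrives out of sequence, which is exactly the pattern a per-packet scheme produces on mismatched links.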
By definition, an IP network does not guarantee packet ordering. But in most network situations packets do in fact arrive in the same order they were sent. Sometimes there is a small amount of reordering when a flow is just starting, or when a route change occurs in the network. But most of the time the packets arrive exactly as sent. TCP stacks automatically reorder packets upon arrival so they are delivered to the application in the right order. TCP usually has large buffers, so even a large amount of reordering will not cause a failure, although it can degrade performance.
Voice and video endpoints, however, are more sensitive to packet reordering and may run out of buffering, or of the CPU power needed to reorder packets that constantly arrive out of order. So once again the voice and video application becomes the canary in the coal mine, flagging a problem that must often be addressed at the network level. Method #2 is to change the CEF algorithm so that it assigns each flow to a specific T1 link. This works well for voice streams: a call between phone A and phone B always uses the same T1 link, so all of its packets traverse the same path and are forwarded in order. Phone calls have relatively low bandwidth per call and occur in large numbers, so the calls are likely to end up fairly evenly split across the two T1 links and the load will be balanced.
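Per-flow balancing can be sketched as hashing a flow's addresses to pick a link, so every packet of a given call takes the same path. The hash and the addresses below are illustrative assumptions; this is not Cisco's actual CEF hash, just a model of the idea.

```python
# Per-flow balancing sketch: hash (source, destination) to choose a link.
# Every packet of the same flow hashes the same way, so ordering is preserved.
import hashlib

def pick_link(src, dst, num_links=2):
    """Deterministically map a flow to one of the links."""
    digest = hashlib.md5(f"{src}->{dst}".encode()).digest()
    return digest[0] % num_links

# Hypothetical phone calls between the main office and a remote office:
calls = [("10.0.0.1", "10.1.0.1"), ("10.0.0.2", "10.1.0.2"),
         ("10.0.0.3", "10.1.0.3"), ("10.0.0.4", "10.1.0.4")]
for src, dst in calls:
    print(f"{src} -> {dst} uses T1 link {pick_link(src, dst)}")
```

With many small flows the hash spreads calls roughly evenly, which is why this works well for voice; the next paragraph shows why a few large flows break the assumption.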
We care about the load being balanced because we have assigned an aggregate bandwidth limit for real-time (voice and video) traffic. This limit applies to the two links together, not to one or the other. But on the router it is implemented as half the bandwidth on each T1 link, because there are two physical output queues where bandwidth is measured.
For the video application this causes problems, because video has much higher bandwidth per stream. Suppose we want to be able to support two 384 kbps video conferencing calls from this remote office. A 384 kbps call actually uses about 460 kbps at the network level. To support two 460 kbps streams we need to assign the video class of service 2 × 460 kbps / 3 Mbps ≈ 31% of the link bandwidth. If this is split evenly across the two links, there will be sufficient bandwidth if and only if the two simultaneous video calls are assigned to different links. CEF does not guarantee this, because it does not distinguish between a video call and any other IP flow. If three flows come up in a row (video, web, video), the two video calls can easily end up on the same T1 link.
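The arithmetic above can be worked through directly. Using the round figures from the text (two T1s ≈ 3 Mbps, 460 kbps per call on the wire):

```python
# Worked version of the bandwidth math in the text: two 384 kbps video
# calls, each about 460 kbps on the wire, over 2 x T1 (~3 Mbps combined).
total_kbps = 3000          # two T1 links, ~3 Mbps as in the text
video_call_kbps = 460      # 384 kbps call plus network-level overhead

needed = 2 * video_call_kbps            # 920 kbps for two calls
share = needed / total_kbps             # fraction of the combined links
per_link_budget = share * total_kbps / 2  # the policy as applied per T1

print(f"video class needs {share:.0%} of the combined links")  # 31%
print(f"per-link video budget: {per_link_budget:.0f} kbps")    # 460 kbps
# Each T1's share of the video class fits exactly one 460 kbps call, so
# two calls hashed onto the same T1 would exceed that link's budget.
```

The per-link budget makes the failure mode concrete: the policy is sufficient only if CEF happens to place the two calls on different links.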
So what to do? Method #3 is to use MLPPP. This protocol assigns a sequence number to each packet crossing the virtual link, and the receiving side reorders packets to ensure they are forwarded in the same sequence in which they were sent. This is usually my recommended solution. However, as my Dad used to say when confronted with these issues, "There is no such thing as a free lunch." Resequencing packets on the receiving end is usually done by the router CPU and can impose a significant burden. The router at the end of the MLPPP link may need to be upgraded to ensure it has sufficient resources to handle the extra work.
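The receive-side resequencing MLPPP performs can be sketched as a small buffer keyed by sequence number: out-of-order arrivals are held until the next expected number shows up. This is a simplified model under assumed conditions, with no reassembly timers or lost-packet handling.

```python
# Sketch of MLPPP-style receive-side resequencing: buffer out-of-order
# packets and release them strictly in sequence-number order.

def resequence(arrivals):
    """Take (seq, payload) pairs in arrival order; return payloads in order."""
    buffered = {}
    next_seq = 0
    released = []
    for seq, payload in arrivals:
        buffered[seq] = payload
        while next_seq in buffered:          # drain any in-order run
            released.append(buffered.pop(next_seq))
            next_seq += 1
    return released

# Six packets arriving out of order are forwarded back in order:
out_of_order = [(0, "p0"), (2, "p2"), (1, "p1"), (4, "p4"), (3, "p3"), (5, "p5")]
print(resequence(out_of_order))  # prints ['p0', 'p1', 'p2', 'p3', 'p4', 'p5']
```

The buffering and the per-packet bookkeeping in the loop are the "no free lunch" part: on a real router this work lands on the CPU for every packet crossing the bundle.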
So if you have two or more T1 or E1 links to your offices and you want to support voice and video, take a close look at the tradeoffs and choose the right algorithm and the right hardware to make it work for your situation.