Achieving QoS in a Hybrid Cloud Implementation
Quality of service, or QoS, is important when mixing real-time and bulk traffic. Add big data applications and the challenge grows. Let’s look at strategies that we can use to protect real-time traffic in a hybrid cloud environment where end-to-end QoS may not be possible.
I define a hybrid cloud as a combination of an enterprise on-premises cloud system and a remote, vendor-provided cloud system. The on-premises systems typically support either infrastructure or platform delivered in the as-a-service model, while the vendor systems could provide a variety of services (infrastructure, platform, data center, or software). In a hybrid cloud, applications might have components located on premises or externally. An application that has real-time communications requirements between sites should be prioritized over non-real-time traffic.
You may also have a software service, such as VoIP, that has real-time components. Somehow, you must connect your voice endpoints within the enterprise to the voice control system service. Call control services typically have less stringent timing requirements than the real-time media streams going to conference calling services located in a cloud provider's infrastructure.
No QoS over the Internet
QoS is normally used to prioritize different types of traffic relative to each other. The process involves classifying traffic by marking packets with either a class-of-service (CoS) or a Differentiated Services Code Point (DSCP) identifier. Once packets are marked, the network uses the embedded CoS/DSCP identifier to perform rate limiting and prioritization for forwarding. Time-sensitive packets get transmitted before less-time-sensitive packets. A QoS design typically has four, eight, or twelve different classes.
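To make the marking step concrete, here is a minimal sketch of how an application can set a DSCP value on its own traffic. It uses the standard `IP_TOS` socket option on Linux; the DSCP occupies the upper six bits of the IP TOS byte, so the value is shifted left by two. The class names and values follow the common RFC 4594 recommendations, but the `mark_socket` helper itself is illustrative, not from any particular product.

```python
import socket

# Common DSCP values (per RFC 4594 recommendations).
DSCP_EF = 46    # Expedited Forwarding: voice media
DSCP_AF41 = 34  # Assured Forwarding 41: interactive video
DSCP_BE = 0     # Best effort: bulk traffic

def mark_socket(sock: socket.socket, dscp: int) -> None:
    """Mark outgoing packets on this socket with a DSCP value.

    DSCP is the upper six bits of the IP TOS byte, so shift
    left by two before setting IP_TOS.
    """
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

# Example: mark a UDP socket carrying voice with EF.
voice_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(voice_sock, DSCP_EF)

# Read the TOS byte back and recover the DSCP.
tos = voice_sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos >> 2)
```

Marking at the endpoint only expresses intent; it is the switches and routers along the path that must be configured to honor (or re-mark) these values.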
The problem on the public Internet is that there are too many competing interests for this simple mechanism to work. Therefore, the Internet typically uses a weighted fair queueing mechanism that favors low-volume traffic flows. This mechanism works well for highly interactive applications with low data volume. Voice traffic typically has a low enough volume that it gets good treatment, especially when compared to things like Web page updates, image files, and streaming video. This is as good as it gets on the public Internet.
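The effect described above can be sketched in code. The toy scheduler below is a simplified weighted fair queueing model (not any router's actual implementation): each packet gets a virtual finish time of `max(virtual_time, last_finish[flow]) + size / weight`, and packets are served in finish-time order, so a small voice packet jumps ahead of a backlog of large bulk packets.

```python
import heapq
from itertools import count

class WeightedFairQueue:
    """Simplified WFQ model: serve packets in virtual-finish-time order."""

    def __init__(self):
        self.heap = []           # (finish_time, seq, flow, size)
        self.last_finish = {}    # per-flow finish time of last packet
        self.virtual_time = 0.0
        self.seq = count()       # tie-breaker for equal finish times

    def enqueue(self, flow, size, weight=1.0):
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + size / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, next(self.seq), flow, size))

    def dequeue(self):
        finish, _, flow, size = heapq.heappop(self.heap)
        self.virtual_time = finish
        return flow, size

# A bulk flow queues three 1500-byte packets, then a voice flow
# queues one 200-byte packet; the voice packet is served first.
wfq = WeightedFairQueue()
for _ in range(3):
    wfq.enqueue("bulk", size=1500)
wfq.enqueue("voice", size=200)
order = [wfq.dequeue()[0] for _ in range(4)]
print(order)
```

The low-volume voice flow wins without any explicit priority marking, which is exactly why voice often fares acceptably over the public Internet even though end-to-end QoS is unavailable.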
Multi-Protocol Label Switching (MPLS)
MPLS technology allows carriers to provide a virtual network to each customer. An eight-class mechanism is available to prioritize traffic within each customer’s virtual network. While this mechanism doesn’t differentiate between customers, it does allow each customer to map its internal CoS/DSCP classes into the MPLS priorities.
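The class mapping described above can be illustrated with a short function. MPLS carries priority in a 3-bit Traffic Class field (formerly called EXP), which yields the eight classes; a common convention, assumed here for illustration, maps a 6-bit DSCP to it by taking the top three bits, so EF (46) lands in MPLS class 5.

```python
def dscp_to_mpls_tc(dscp: int) -> int:
    """Map a 6-bit DSCP value into the 3-bit MPLS Traffic Class field.

    Uses the common top-three-bits convention; carriers may
    publish their own mapping tables instead.
    """
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must fit in six bits")
    return dscp >> 3

print(dscp_to_mpls_tc(46))  # EF (voice) -> class 5
print(dscp_to_mpls_tc(34))  # AF41 (video) -> class 4
print(dscp_to_mpls_tc(0))   # best effort -> class 0
```

In practice the mapping is negotiated with the carrier, since the carrier's edge routers decide how each MPLS class is queued across its backbone.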
Because MPLS is less expensive than using dedicated leased lines, it has been the preferred WAN technology for some time. However, some newer, less expensive technologies have begun supplanting MPLS.
Dedicated On-Ramp Providers
A new class of Internet service provider (ISP) and hosting provider has emerged to facilitate cloud connectivity. These carriers connect to Internet exchange points (IXPs), the facilities where networks interconnect: ISPs, wireless carriers, large content companies like Facebook, Amazon, and Google, and the big cloud providers themselves.
This new class of ISP connects to the IXPs and then sells either dedicated or MPLS links to enterprise customers. These links can honor QoS markings and provide a high-speed connection directly to the enterprise's cloud hosting provider of choice.