Network Impact of Moving Collaborations to the Cloud
When an enterprise transitions from an in-house to a cloud-based collaboration infrastructure, the network paths (and thus the loads) often change. This creates challenges on the internal network (e.g., an MPLS core), at Internet access points, and in how we manage transport quality. In my last article, I discussed the eight steps to moving collaborations to the cloud, so now, let’s look at the network changes.
Today, enterprise networks are carefully designed to support collaboration traffic, especially audio and video. QoS is used to prioritize audio and video traffic as it transits the WAN. Network connections are sized to provide enough bandwidth in the appropriate QoS classes to accommodate the peak bandwidth demands from each office. Likewise, links to the data center, where traffic aggregates as it connects to audio and/or video bridges, are appropriately sized.
The traffic patterns change as we move to a cloud-based solution because each client device (laptop, phone, or video system) will now connect to the external (Internet-based) cloud rather than to an internal data center.
If the current enterprise network strategy is to backhaul Internet traffic to core data centers and then jump out to the Internet, little change occurs to the network paths used by office-based clients when we migrate to a cloud-based solution. The Internet connection in the data center will see an increased load since the traffic that once terminated within the data center (e.g., bridge connections) now crosses out to the Internet. This link may need to be up-sized to accommodate the new demand.
However, if the current enterprise network strategy has more distributed Internet connectivity, the collaboration traffic makes its way to the nearest Internet access link and then jumps onto the Internet. This means regional, or even office-based Internet connections may need to be up-sized. Cloud migrations can remove a substantial amount of bandwidth from office-to-data center MPLS connections. Given the lower cost of Internet connections, this is often a net financial win.
For enterprises that have deployed an SD-WAN strategy (or plan to), this can be a symbiotic solution, since the SD-WAN can provide direct Internet access to each office. A pair of Internet connections into each office can provide both office-to-data-center connectivity and direct Internet access, often with a dynamic allocation of bandwidth between the two demands as the load shifts across the business day.
Quality of Service
For enterprises that still rely on an MPLS network to carry audio and video traffic, QoS is still in order, and a traditional strategy can be used. Note that for this to work correctly, the data center location that receives audio and video streams from the cloud-based collaboration provider must recognize and mark those streams as they enter the MPLS network, ensuring that traffic flowing from the cloud back toward offices over the MPLS network is QoS protected.
Offices with a direct Internet connection can likely forgo QoS marking, as once the traffic is into the office, it will be supported on high-bandwidth LAN connections.
Capacity & Peak Loading
To properly size MPLS networks, we must find the peak demand periods, determine the number of concurrent users for audio, video, and content sharing streams, and then size the MPLS queues and overall links accordingly. This isn’t a change. However, we are now dependent on the cloud provider for the analytics to determine call patterns. Ensure the call data you receive from your cloud provider has enough detail to determine which internal office was engaged on each call. If your cloud provider can only tell you the IP address of the Internet access point, you won’t have enough information to find your peak periods and peak demand for individual office links.
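The analysis above amounts to bucketing call records by office and time, then finding the busiest bucket per office. A minimal sketch in Python follows; the record format (office, start, end, per-stream kbps) and the bandwidth figures are illustrative assumptions, not any specific provider's export format.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical call-detail records: (office, start, end, stream kbps).
calls = [
    ("chicago", "2024-05-01T14:00", "2024-05-01T14:30", 1500),
    ("chicago", "2024-05-01T14:15", "2024-05-01T15:00", 1500),
    ("denver",  "2024-05-01T14:10", "2024-05-01T14:40", 500),
]

def peak_kbps_per_office(calls, bucket_minutes=5):
    """Sum concurrent stream bandwidth in fixed time buckets, per office,
    and return the busiest bucket's total for each office."""
    buckets = defaultdict(int)  # (office, bucket index) -> total kbps
    for office, start, end, kbps in calls:
        b0 = int(datetime.fromisoformat(start).timestamp() // (bucket_minutes * 60))
        b1 = int(datetime.fromisoformat(end).timestamp() // (bucket_minutes * 60))
        for b in range(b0, b1 + 1):
            buckets[(office, b)] += kbps
    peaks = defaultdict(int)
    for (office, _), total in buckets.items():
        peaks[office] = max(peaks[office], total)
    return dict(peaks)

print(peak_kbps_per_office(calls))
```

With the sample records, the two Chicago calls overlap from 14:15 to 14:30, so Chicago's peak is their combined 3,000 kbps; this peak figure per office is what drives the QoS queue and link sizing.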
Crossing the Internet
Internet traversal provides its own challenges not only with the bandwidth of the links but with managing the quality of the transport. We want to maintain relatively loss-free connections to the service provider cloud, so audio and video quality remain high. The Internet doesn’t give us the option of using QoS, so additional strategies are required.
One of the biggest challenges is the access link between the remote office and the Internet core, as this is usually the lowest-bandwidth link in the path between the user and the cloud. QoS delivers its highest benefit at exactly this kind of transition from a high-bandwidth link to a low-bandwidth link.
Outbound traffic (e.g., leaving an office LAN and passing onto an Internet access link) can still be prioritized. Setting up a methodology for recognizing collaboration traffic (e.g., QoS marking or port/protocol identification) and then prioritizing this traffic outbound on the Internet link can ensure transmitted traffic reaches the Internet core with low loss.
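One way collaboration traffic gets recognized is by the DSCP mark the sending application places in the IP header, which the office edge router can then match when prioritizing outbound traffic. A minimal Python sketch of marking a UDP socket follows; the specific code points shown (EF for voice, AF41 for interactive video) are common conventions, and in practice the marking is usually done by the collaboration client or the network edge rather than custom code.

```python
import socket

# DSCP code points occupy the upper six bits of the IP TOS byte,
# hence the shift left by two.
DSCP_EF   = 46 << 2   # Expedited Forwarding: commonly used for voice
DSCP_AF41 = 34 << 2   # Assured Forwarding 41: commonly used for video

def marked_udp_socket(dscp_tos):
    """Create a UDP socket whose outbound packets carry the given DSCP
    mark, so an edge router can classify and prioritize them."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_tos)
    return s

voice = marked_udp_socket(DSCP_EF)
print(hex(voice.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
voice.close()
```

Marking alone does nothing, of course; the benefit comes when the Internet access router is configured to queue EF- and AF41-marked packets ahead of bulk data on the congested outbound link.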
Inbound traffic (from the cloud provider to the office) doesn’t have this option. Here we can use some other techniques, such as using a WAN optimizer to open up bandwidth space for the current collaboration traffic (by slowing down data traffic), or we can set up a separate Internet link that only supports audio and video (well-behaved, not “bursty”) traffic.
An SD-WAN implementation can also help here because it typically supports multiple Internet connections from each office to the cloud. Multiple paths can be dynamically optimized to ensure sufficient bandwidth, and low-loss paths are available for audio and video streams.
Another common point of packet loss is the peering points between Internet service providers (ISPs), where they hand off traffic. If the enterprise has contracted with ISP X, but ISP X isn’t directly connected to your cloud service provider, traffic has to cross through a peering point to ISP Y, which is connected to the cloud service. This peering point can be eliminated by determining which ISPs are directly connected to your cloud provider, and then contracting your Internet access with one of those ISPs.
It’s also possible in some cases to use a more direct connection or even a dedicated Metro-E or MPLS-type connection between your enterprise and the cloud-based provider. If maintaining very high-quality transport is a requirement for your company (for example, in high-end video conferencing rooms), or if Internet transport in a particular region is consistently poor, then a direct connection may provide the right answer for you.
Monitoring the Internet Paths
In any of these cases, analytics are a key component. Remember: ISPs are all innocent of packet loss until proven guilty. And that Internet access link is a no man’s land, where no one claims responsibility. Having a path-based testing methodology in place that constantly tests the paths between your Internet-connected facilities (offices or data centers) and the cloud provider is critically important to determine when transport issues are occurring, and in which portion of the network path. Without this information, you and your providers will just send tickets to each other, and no cure will be forthcoming.
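The path-based testing described above boils down to probing each office-to-provider path on a schedule and summarizing loss and latency, so degradation can be localized to a path segment. A small sketch follows; the path names are hypothetical, and the samples stand in for what a scheduled ICMP or UDP prober would collect against each provider location.

```python
import statistics

# Hypothetical probe log: per-path round-trip times in ms,
# with None marking a lost probe.
probes = {
    "chicago->provider-us-east": [22, 24, None, 23, 25, 21, None, 24],
    "dc->provider-us-east":      [8, 9, 8, 8, 10, 9, 8, 9],
}

def path_health(samples):
    """Summarize loss percentage and average latency for one path."""
    lost = sum(1 for s in samples if s is None)
    good = [s for s in samples if s is not None]
    return {
        "loss_pct": 100.0 * lost / len(samples),
        "avg_ms": statistics.mean(good) if good else None,
    }

report = {path: path_health(s) for path, s in probes.items()}
for path, h in report.items():
    flag = "ALERT" if h["loss_pct"] > 1.0 else "ok"
    print(f"{flag:5} {path}: {h['loss_pct']:.1f}% loss, {h['avg_ms']:.2f} ms avg")
```

A report like this, kept historically, is what turns the finger-pointing between you and your providers into a specific, evidenced claim about which segment is dropping packets and when.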
Cloud-based providers will likely have multiple geographically distributed data centers that they are using to support the service. Chat with your collaboration service provider to find out all those locations (preferably a list of subnets in use) and also ask them how clients will be routed (only to the nearest one or directly to the location supporting the call bridge?). Each of the possible network paths must be a part of your testing methodology to ensure consistent high-quality calls.
One of the big questions for these kinds of migrations is: Where are your users located? A big advantage of a cloud-based collaboration service is that users who are directly connected to the Internet (e.g., working from home, on the road, etc.) don’t need to use the corporate network at all. Their connections will be routed directly to the cloud-based service.
For companies whose users are largely in the office, this will not have a big effect. For companies who support telecommuting or have a large user base that is naturally outside the company (e.g., customers, sales teams, service delivery teams, vendors, etc.), this will have a significant effect. I recently worked with a large consulting services firm and found that a full 80% of their calls were placed from direct Internet connections. This can substantially change the network bandwidth demands and resulting finances.
If your collaboration service provider supports large webinars on the same platform, some analysis is needed to show how they impact the bandwidth of your office connections.
For example, if a webinar will likely be attended by 100 employees in the Chicago office, and the solution is an extension of a standard video/audio solution, then 100 concurrent connections will be established between Chicago office users and the cloud during the webinar. If these are video connections, they will consume between 500 Kbps and 3 Mbps each, creating a 50 Mbps to 300 Mbps demand on the Chicago office link during the webinar.
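The arithmetic behind that estimate is worth making explicit, since it scales linearly and is easy to rerun for your own office sizes and codec rates:

```python
# Per-attendee streams to the cloud: office demand scales linearly with
# attendance. The 500 Kbps / 3 Mbps figures are the article's example
# range for video streams.
def webinar_demand_mbps(attendees, per_stream_kbps):
    return attendees * per_stream_kbps / 1000

low  = webinar_demand_mbps(100, 500)    # 100 attendees at low-rate video
high = webinar_demand_mbps(100, 3000)   # 100 attendees at high-rate video
print(f"Chicago office demand: {low:.0f}-{high:.0f} Mbps")
```

Run against a typical office Internet link, numbers like these quickly show whether a single all-hands webinar will saturate the connection.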
Most dedicated webinar solutions of the past delivered a single stream to each office, where a stream replicator could be deployed in a WAN optimizer device or as a separate server. This type of solution alleviates the concurrent-demand problem by replicating that common stream once it is inside the office, where there is plenty of LAN bandwidth to support it.
Talk this use-case through with your cloud service provider if this is a planned part of your solution to ensure these high-demand scenarios will not fail due to insufficient bandwidth. Or plan to encourage users to connect from home or to convene as a group in one or more video conferencing room(s) in the office.
Clearly, having the network team involved in transition planning is critical, and the earlier these challenges can be discussed, planned for, and executed, the better. Network link bandwidth changes often have a long lead time. Having a new collaboration solution work seamlessly is critically important for user adoption, so getting it right the first time pays big dividends.