In my previous No Jitter post, "Why Vonage's TokBox Buy Is a Big Deal," I discussed the challenges carriers face in delivering services in the emerging IP-for-everything world. One of my key points was that the scale limitations of access carrier footprints, coupled with the access carriers' inability to sell outside of those footprints, make competing against over-the-top (OTT) service providers, which face no such gating factors, a significant challenge.
With this in mind, I had an extensive conversation about edge computing with Jeff Sharpe, director, product strategy, network & communications solutions, for Adlink Technology. What interested me most was how edge computing can transform service value for carriers by moving processing and intelligence out into the network and to the cell tower or the compute edge (typically a central office location).
The discussion led to an interesting insight: an access carrier can create strong value for customers if it can provide services that benefit from residing in the network rather than in a large (read: Amazon, Google, Microsoft) cloud data center. In our conversation, Jeff identified two critical issues that justify putting service intelligence in the network or at its edge, rather than simply carrying the data to a large OTT data center for processing.
- Latency -- Data moving through a network incurs latency from both the network equipment and the distance traveled. Carrying data traffic all the way to a remotely located data center may introduce too much latency for some applications. Examples that come to mind are vehicle-to-vehicle traffic for self-driving cars, coordination traffic in industrial automation, and other activities for which the latency budget is tight. Innumerable applications -- existing and emerging -- could benefit from the low latency an access carrier service could provide. The key point is that by moving intelligence out to a cell tower, access carriers can deliver response times as low as a few milliseconds -- generally not achievable given the transit time to a major data center.
- Bandwidth -- If processing at the edge rather than in the core significantly reduces the bandwidth that must be transported, an access carrier can justify delivering an edge service. The key here is that the cost of the compute resources is less than the cost of carrying the traffic, and that the pre-processed data doesn't have significant retention value. Video stream analysis is a good example. Consider a city intersection with, say, four to eight traffic light cameras. Using local processing, the city could analyze the video streams for information such as the number of cars, bicycles, and pedestrians that pass through the intersection on each light cycle. This eliminates the need to carry multiple video streams through the network to a central facility to do the same analysis.
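To make the two points above concrete, here is a rough back-of-envelope sketch in Python. All of the figures in it -- the distances to the tower and data center, the per-camera bit rate, the light-cycle length, and the size of a per-cycle count report -- are my own illustrative assumptions, not numbers from the discussion with Jeff:

```python
# Back-of-envelope: why edge placement matters for latency and bandwidth.
# Every constant below is an illustrative assumption, not a measured value.

FIBER_SPEED_KM_PER_MS = 200.0  # light in fiber covers ~200 km/ms (~2/3 of c)

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay alone for one round trip, ignoring queuing/processing."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# Latency: a cell-tower edge node vs. a distant cloud data center.
edge_rtt = round_trip_ms(10)     # assume ~10 km to the tower / central office
cloud_rtt = round_trip_ms(1500)  # assume ~1,500 km to a regional data center

# Bandwidth: eight traffic cameras streaming raw video vs. per-cycle counts.
streams_mbps = 8 * 4.0                    # assume ~4 Mbps per 1080p camera
counts_bps = (200 * 8) / 60               # assume ~200 bytes per 60-second cycle

print(f"edge RTT ~ {edge_rtt:.2f} ms, cloud RTT ~ {cloud_rtt:.2f} ms")
print(f"raw video: {streams_mbps:.0f} Mbps vs. counts: {counts_bps:.1f} bps")
```

Even this crude estimate shows the shape of the argument: propagation alone puts the distant data center an order of magnitude or two behind the tower, and shipping counts instead of video cuts the sustained load from tens of megabits per second to a trickle.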
In talking with Jeff, I learned that such capabilities give access carriers the opportunity to deliver new services that their OTT competitors can't offer. The inclusion of in-network/edge processing offers a differentiated value proposition for carriers operating access networks. They must focus on applications and use cases for which having compute resources at the edge and throughout the access network provides significant advantages. If access carriers want to avoid being "pipe" vendors, this seems to be a clear way to add new functionality.
It'll be interesting to see how the access carriers and Adlink, which offers platform and embedded edge compute capabilities to a range of industries, define the edge processing capability as well as the services that can take advantage of it. For any access carrier contemplating an OTT future and wishing to continue selling services, a discussion with Adlink or another edge compute company should be a high priority. Developing a strategy to deliver services that benefit from edge processing seems to be the only logical way to create value in the coming IP-everything and OTT world.