In the recently published Gartner Maverick Research report, "The Edge Will Eat the Cloud," Thomas Bittman said: "The growth of the Internet of Things and the upcoming trend toward more immersive and interactive user interfaces will flip the center of gravity of data production and computing away from central data centers and out to the edge."
In a hyper-converged digital world, milliseconds matter. People using virtual reality and self-driving cars require near-instantaneous data, computing, and connectivity. The network latency budget for the next generation of applications will shrink to five milliseconds.
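A quick back-of-the-envelope calculation shows why a five-millisecond budget forces computing out to the edge: light in fiber travels at roughly 200,000 km per second, so the round trip alone caps how far away the serving data center can be. The sketch below is illustrative only and assumes ideal fiber propagation with no queuing or processing delay.

```python
# Back-of-the-envelope latency budget: how far away can the server be?
# Assumes propagation in fiber at ~200,000 km/s (about two-thirds the
# speed of light) and ignores queuing, serialization, and processing.

FIBER_KM_PER_MS = 200.0

def max_server_distance_km(rtt_budget_ms: float) -> float:
    """One-way distance reachable within a round-trip latency budget."""
    return (rtt_budget_ms / 2) * FIBER_KM_PER_MS

if __name__ == "__main__":
    for budget_ms in (5, 20, 50):
        print(f"{budget_ms} ms round trip -> server within ~"
              f"{max_server_distance_km(budget_ms):.0f} km")
    # 5 ms round trip -> server within ~500 km
```

At a five-millisecond round trip, the serving compute has to sit within a few hundred kilometers of the user, which is exactly why the center of gravity shifts from a handful of central data centers to many edge locations.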
The Bane of Backhauling
This Cloud 2.0 architecture will force a totally new network and security architecture. In today's enterprise networks, backhauling traffic to a central data center is common. Enterprises centrally deploy and manage virtual private networks (VPNs), mobile device management, proxies, and private/public internetworking. They backhaul traffic because putting all the network security controls at the edge is too expensive. With highly distributed data centers in Cloud 2.0, security will also become more distributed.
The cost of putting networking and security software at the edge of networks is dramatically falling as startups bring new approaches to the market. Instead of a network architecture of edge, distribution, and core, where all the routing intelligence and security controls are at the distribution layer, future network architectures will have an intelligent edge running on any commodity hardware/VM with no distribution layer. The core in this type of architecture is a superfast IP forwarding plane.
Today's software-defined WAN (SD-WAN) implementations perpetuate backhauling rather than connecting users directly to the services they're consuming. SD-WANs use pre-established (static) tunnels that add a 30% bandwidth overhead. In many cases, one of the tunnels goes to Zscaler or another cloud security stack. While the Zscaler platform is in more than 80 data centers worldwide, it still forces backhauling, which is the bane of network architecture when every millisecond counts.
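The overhead comes from encapsulation: every tunneled packet carries an extra outer IP header plus ESP framing, padding, and an integrity check. The sketch below is a rough model using typical byte counts for an AES-CBC/HMAC-SHA IPsec tunnel; those counts are assumptions, and the aggregate overhead depends on the cipher and the traffic mix.

```python
# Rough model of per-packet IPsec ESP (tunnel mode) overhead.
# Byte counts below are typical for AES-CBC + HMAC-SHA but are
# assumptions; real overhead varies with cipher, padding, and MTU.

OUTER_IP = 20   # new outer IPv4 header
ESP_HDR  = 8    # SPI + sequence number
IV       = 16   # AES-CBC initialization vector
ESP_TRL  = 2    # pad-length + next-header bytes
ICV      = 16   # integrity check value (truncated HMAC)
BLOCK    = 16   # AES block size used for padding alignment

def tunnel_overhead_pct(inner_packet_bytes: int) -> float:
    """Percent of extra bytes on the wire for a given inner packet size."""
    pad = (-(inner_packet_bytes + ESP_TRL)) % BLOCK
    extra = OUTER_IP + ESP_HDR + IV + pad + ESP_TRL + ICV
    return 100.0 * extra / inner_packet_bytes

if __name__ == "__main__":
    for size in (64, 200, 512, 1400):
        print(f"{size:>5}-byte packet: ~{tunnel_overhead_pct(size):.0f}% overhead")
    # Small packets (TCP ACKs, VoIP) pay far more than 30%; full-size
    # packets pay far less -- the aggregate depends on the traffic mix.
```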
Users experience slower cloud application performance in the office than at home because of all the backhauling that occurs in today's corporate networks. In one example, a contractor found that downloading a 100-megabyte file from Office 365 SharePoint took two minutes and 10 seconds in the office but only 13 seconds at home.
From his office in Seattle, the contractor would log on to the VPN to gain access to the guest Wi-Fi network, which had an IPsec tunnel to the Wi-Fi controller. The tunnel ran over an SD-WAN, through a data center 1,000 miles away, and then back out to the hosted O365 site in Seattle. The network latency at home was 8ms to O365, while in the office it was 52ms for small packets and 77ms for large packets. The large packets were fragmented and further delayed by encryption as they passed through three different IPsec tunnels (VPN, Wi-Fi, and SD-WAN), even though the contractor already had a secure TLS connection to O365.
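Those numbers track with basic TCP behavior: a single connection's throughput is roughly bounded by window size divided by round-trip time, so latency added by backhauling directly stretches download times. A minimal sketch using the figures above, with a 64 KB effective window assumed for illustration (and ignoring loss and slow start):

```python
# Why RTT dominates download time: a single TCP flow's throughput is
# roughly bounded by window_size / RTT. The 64 KB window is an assumption
# for illustration; the RTTs and file size come from the example above.

FILE_MB   = 100
WINDOW_KB = 64   # assumed effective TCP window

def max_throughput_mbps(rtt_ms: float, window_kb: float = WINDOW_KB) -> float:
    """Upper bound on a single flow's throughput, in Mbit/s."""
    bits_per_rtt = window_kb * 1024 * 8
    return bits_per_rtt / (rtt_ms / 1000) / 1e6

def download_time_s(file_mb: float, rtt_ms: float) -> float:
    """Time to move file_mb megabytes at the window/RTT bound."""
    return (file_mb * 8) / max_throughput_mbps(rtt_ms)

if __name__ == "__main__":
    for label, rtt in (("home, 8 ms", 8), ("office, 77 ms", 77)):
        print(f"{label}: ~{max_throughput_mbps(rtt):.1f} Mbit/s, "
              f"{FILE_MB} MB in ~{download_time_s(FILE_MB, rtt):.0f} s")
    # ~66 Mbit/s and ~12 s at home vs ~7 Mbit/s and ~117 s in the office --
    # in the same ballpark as the 13 seconds and 2:10 observed.
```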
MPLS Fading Away
Is the future of WAN no WAN? In a cloud, mobile, digital world, where every millisecond counts, the Internet is the best network for connecting users to cloud applications. Private internets interconnect services and applications. Tunnels, including MPLS and IPsec, are static and add latency through backhauling. And when encryption is handled at the application layer, tunnels add a redundant layer of encryption. According to Google's statistics, 71 of the top 100 websites use HTTPS by default, up from 37 a year ago.
The promise of MPLS is QoS and security, but with application encryption, MPLS becomes irrelevant. It creates a more expensive and higher latency network than modern approaches. MPLS points of presence (PoPs) are rarely co-located with cloud hosting and caching. The number of MPLS PoPs isn't growing, while the number of Internet peering points co-located with content distribution networks and local cloud hosting sites continues to expand dramatically, pushing content and computing closer to the Internet user. MPLS will fade away, joining ATM and frame relay in the networking graveyard.
Also in Cloud 2.0, expect the cloud providers to get into the networking market so they can ensure the performance and security of their applications all the way to the user. As every millisecond counts, cloud providers will rely on over-the-top (OTT) delivery using Internet transport to connect with their users and build private internetworks for their internal use.