What the 'Cloud Model' Means for Enterprise Tech
As cloud providers evolve toward 'premises cloud' hosting, the old model of networking isn't just irrelevant; it's impossible.
Everyone has heard the story that public cloud services will totally destroy enterprise IT, leaving data centers vast, presumably cooled, wastelands. Everyone except perhaps the cloud enthusiasts also knows that's certainly not true today, nor is it likely to be true even in 10 years. Still, public cloud services will wreak some massive changes in networking and IT, and it won't take a decade for those changes to show up in planning and budgets. They'll not be exactly what you expect, either.
Anyone who looks closely at cloud services developments over the last year will see that we're redefining what the cloud is. It's not a replacement data center, or even a separate one, but a new IT model that subsumes all hosting into a common pool of resources and distributes applications across the pool based on performance and pricing policies. Want an easy way to visualize the "cloud of the future"? Think of it as a giant virtual computer, one that absorbs all applications and resources and starts just inside the network connection of every user.
Networks No More?
The first impact of this giant-computer model, since it starts where the users connect, is that it absorbs all of the network. What's a company VPN for if not to connect its users to IT resources? Well, in our giant-computer model, there's nothing to connect. In a cloud-modeled enterprise, all of the resources are, virtually speaking, right at the user's edge and all the network connections are either access connections to the cloud or connections within it.
This model will develop for two reasons. First, as worker empowerment shifts to exploit mobile broadband, public Internet connections will provide the last mile to users. Second, cloud providers will gradually begin working with network operators, first to connect with enterprise VPNs and then to interconnect with one another across high-QoS trunks. This will gradually shift enterprise "VPN" service to being a connection to this multicloud "connection bus." In effect, the result is a virtual-network-operator (VNO) model for wireline networks, with the cloud providers being the virtual operators for future services.
The second impact of the giant-computer model of the cloud is the redistribution of IT assets to remote offices. Managing applications that run on remote servers is a problem; managing cloud resources that happen to be on your premises in branch offices is less problematic -- it's all done remotely by design. In any event, by 2020 all the major cloud providers will be offering "premises cloud" hosting on servers they own and manage.
The drivers for this are already visible in Microsoft's Azure Stack and Amazon's Greengrass, both of which let users run cloud applications on their own devices. From that, it's a small step both to expand the scope of what can be run and to offer the premises devices as managed elements. Then there's IoT, which requires shorter control paths to ensure event responses are timely in driving process automation, and thus favors locating some processing capabilities right where the process automation is being done.
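The appeal of short control paths is easy to sketch. Below is a toy, Lambda-style event handler of the kind Greengrass can host locally at the edge; the event fields and actions are hypothetical, but the point is that the decision happens next to the process, not after a round trip to a distant data center.

```python
# Toy illustration (hypothetical event schema, not real Greengrass code):
# an edge-hosted handler keeps the sensor-to-actuator control loop short.
def function_handler(event, context=None):
    """React to a local sensor event within a tight latency budget."""
    if event.get("temperature_c", 0) > 90:
        # Acting locally avoids the WAN round trip a central cloud would add.
        return {"action": "shut_valve", "latency_budget_ms": 10}
    return {"action": "none"}

print(function_handler({"temperature_c": 95}))
```

The same function body could run unmodified in the central cloud; what the premises-cloud model changes is only where it is placed.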
On-Prem Outsourced Servers
This isn't going to be a reversal of server consolidation, just a transformation of the goals of server consolidation to a virtual plane. Servers, as virtual resources, can be managed as efficiently in one place as another if you discount the process of physical repair. But if you don't need to rush a tech out to fix something because you can shift processes to another resource temporarily, then you don't really need to worry about the fixing dimension -- send a replacement and rely on plug-and-play to get it commissioned. Of course, this facilitates treating remote servers as cloud provider resources, outsourced on premises but part of the cloud.
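The "shift processes instead of rushing a tech out" idea reduces to a simple placement operation. The sketch below (all names hypothetical) moves every workload off a failed premises server to another member of the resource pool; in a real cloud this would be the provider's orchestration, not enterprise code.

```python
# Toy sketch of failover-by-reassignment: when a remote server fails,
# its workloads shift to a healthy pool member; the broken box is simply
# replaced later and rejoins via plug-and-play.
def reassign(placements, failed, pool):
    """Return new workload placements with the failed server vacated."""
    healthy = [s for s in pool if s != failed]
    return {work: (healthy[0] if server == failed else server)
            for work, server in placements.items()}

placements = {"billing": "branch-7", "crm": "regional-dc"}
print(reassign(placements, "branch-7", ["branch-7", "regional-dc"]))
```

Because repair is decoupled from recovery, the economics of managing a server in a branch office converge with managing one in a provider's data center.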
Where are the applications in all of this? That's the third point of cloud-model revolution. Even today "I don't know!" or "Who cares?" is the answer, because in the cloud you run on virtual resources that could be almost anywhere. That trend is going to accelerate, partly because of further cloud adoption but more because we're going to virtualize the applications themselves.
Features on the Fly
Everyone in the cloud space is looking at functional programming, which is based on nubbins of code pushed out along workflows toward the user, to a point where control loops are short but hosting economies are still good. It will take time, but functional programming concepts will break down the whole notion of cohesive applications, replacing it with a model where features are almost migratory, shifting as needed to new points and multiplying as needed to address changes in demand.
You don't really have "hosts" in a functional future, and you don't pay for them in the cloud. You pay for what you run, right down to the function/feature level. It's the ultimate in elasticity, and it is also the final step in virtualization. With functional programming the whole notion of servers and VMs and containers becomes invisible. The machine in the cloud is all there is.
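Paying "for what you run, right down to the function/feature level" can be modeled in a few lines. This is a toy metering wrapper with an assumed per-millisecond rate, not any provider's actual billing scheme; it shows how charges accrue per invocation with no notion of a server underneath.

```python
import time

PRICE_PER_MS = 0.000002  # hypothetical rate, for illustration only

def metered(fn):
    """Wrap a stateless function so each call accrues a per-ms charge."""
    def wrapper(*args):
        start = time.perf_counter()
        result = fn(*args)
        elapsed_ms = (time.perf_counter() - start) * 1000
        wrapper.cost += elapsed_ms * PRICE_PER_MS
        return result
    wrapper.cost = 0.0
    return wrapper

@metered
def resize_image(pixels):
    # Stand-in for any stateless feature; it could run anywhere.
    return pixels // 2

resize_image(1024)
print(f"accrued charge: ${resize_image.cost:.10f}")
```

The function carries its own billing; where it happened to execute never enters the picture, which is the "machine in the cloud is all there is" point.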
In the past, networks defined workflows because they carried them. In the future, both work and processing migrate around in the cloud to find the best relationships, and networks simply support those extemporaneous work/process relationships and carry the workflows they define. The old model of networking isn't just irrelevant; it's impossible, and this future is already taking shape. In a year or two everyone will see it, and the smart vendors, cloud providers, and businesses will be exploiting it to secure a long-term competitive advantage.
Follow Tom Nolle on Google+!