No Jitter is part of the Informa Tech Division of Informa PLC


Converging on a Cloud Vision?

Revolutions never come easily, and in fact most of the things we call "revolutions" really come through compromise. I realized when reviewing some of the results of our enterprise survey that compromise might in fact be the key to the future of the cloud. It's hard to see the shape of this compromise now, amid the full flood of cloud hype, but if you talk with enterprises who have done serious cloud planning, you can see a glimmer of the future.

According to the cloud-leading planners, most companies start off with two fairly polarized sets of cloud visions. IT professionals tend to be thinking about creating a more efficient IT environment by extending the principle of virtualization, and also about off-loading peak demand or providing application backup. Line organizations want to be able to deploy applications quickly, add or change the resources committed to run them extemporaneously, and reduce what they see as a burdensome cost of sustaining internal IT resources. To a degree this reflects the classic tension between IT and the line groups within a company.

The problem it poses for cloud planning is that the two goals are nearly impossible to reconcile, and something as significant as the cloud needs to have broad buy-in. Many of the early cloud adopters have launched very limited projects because they can't get higher-level approval to do more, and some line organizations have jumped out of internal IT into SaaS, only to find that it's not meeting their goals either.

The few companies that have managed to get these two groups together have done so by adopting a vision that's almost as easy to understand as "the cloud": "plug-and-play apps". The idea is to build IT processes from applications that can be readily installed and run either in the public cloud or in the company's data center. This vision gives both line and IT organizations something to work toward, and it also wins support for the combined goal at the senior management level, enough to get things moving and keep up the momentum.

In the plug-and-play-apps vision, the first step is for the IT organization to create a technical description of the application requirements for plug-and-play operation. The key point is that the applications have to be self-provisioning, meaning that the process of installing and launching them has to be fully automated. Second, the apps have to be self-integrating, meaning that the same provisioning automation that assigns apps to public cloud or private IT resources has to connect them into a cohesive set of tools for workers to use, with data dependencies, inter-process dependencies, and addressing all satisfied.
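To make those two requirements concrete, here is a minimal sketch, in Python, of what a plug-and-play descriptor and provisioning engine might look like. Everything here is hypothetical: the `AppDescriptor` fields, the `Provisioner` class, and the placement rule are illustrative assumptions, not any real cloud API. The point is only to show the two behaviors the text names: automated placement (self-provisioning) and automated wiring of dependencies (self-integration).

```python
from dataclasses import dataclass, field

# Hypothetical descriptor for a "plug-and-play" app: everything the
# provisioning automation needs to install and integrate it, in one place.
@dataclass
class AppDescriptor:
    name: str
    cpu_cores: int                                  # resource requirement
    prefer_public_cloud: bool                       # placement policy hint
    provides: list = field(default_factory=list)    # services this app exposes
    requires: list = field(default_factory=list)    # services it must bind to

class Provisioner:
    """Toy self-provisioning / self-integrating engine (illustrative only)."""

    def __init__(self, private_capacity: int):
        self.private_capacity = private_capacity    # free cores in the data center
        self.registry = {}                          # service name -> endpoint

    def deploy(self, app: AppDescriptor) -> dict:
        # 1. Self-provisioning: pick a target automatically, no manual install.
        if app.prefer_public_cloud or app.cpu_cores > self.private_capacity:
            target = "public-cloud"
        else:
            target = "private-dc"
            self.private_capacity -= app.cpu_cores

        # 2. Self-integration: resolve every required service from the
        #    registry so data and inter-process addressing are satisfied.
        wiring = {dep: self.registry[dep]
                  for dep in app.requires if dep in self.registry}
        missing = [dep for dep in app.requires if dep not in self.registry]

        # Publish this app's own services for later arrivals to bind to.
        endpoint = f"{target}/{app.name}"
        for svc in app.provides:
            self.registry[svc] = endpoint

        return {"app": app.name, "target": target,
                "wired": wiring, "unresolved": missing}

# Deploy a CRM app privately, then an analytics app that binds to it.
p = Provisioner(private_capacity=8)
crm = p.deploy(AppDescriptor("crm", cpu_cores=4, prefer_public_cloud=False,
                             provides=["customer-db"]))
analytics = p.deploy(AppDescriptor("analytics", cpu_cores=16,
                                   prefer_public_cloud=False,
                                   requires=["customer-db"]))
print(crm["target"])        # private-dc
print(analytics["target"])  # public-cloud (exceeds remaining private capacity)
print(analytics["wired"])   # {'customer-db': 'private-dc/crm'}
```

Note that the second app lands in the public cloud purely because the private capacity ran out, and it still finds the first app's service automatically; that indifference to location, combined with automatic wiring, is the whole of the plug-and-play idea.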

The interesting thing about this is that it's a shift from the current vision of the cloud: from a means of migrating IT to public hosting, to an application architecture. Many would argue that this vision is something like SOA of old, and they're right in that it would be easier to fulfill it using SOA principles at the application level. But SOA isn't enough to create this; you need automated provisioning and integration tools that go beyond the SOA model used today. Even the "DevOps" work that unifies application development and operations with the goal of facilitating automated provisioning isn't enough. None of these projects seems to have any current interest in the application integration angle, for example, and without that, the pieces of the plug-and-play IT infrastructure built in the cloud wouldn't create a cohesive company IT position.

The interesting thing about this plug-and-play-apps view is that it could also be a better way of looking at IT, or at any process based on communications and software, including UC. Imagine collaboration and communication built on modular tools assembled by the user on demand and integrated into applications automatically. You don't care where they run, and you don't "standardize" on anything other than the information model, which has to be integrated with the models of other apps. Data-driven networking, computing, cooperation, and productivity. Sounds good to me.

The question is whether it's going to sound good to the vendors who could provide it. Modularity and plug-and-play have been embraced within discrete software packages, but not very well across package and vendor boundaries. If you're an incumbent, you don't want to create a structure that lets a competitor integrate into your own hard-won accounts. Thus, it may be that the current open-source cloud-stack work (OpenStack, CloudStack, etc.) will be the vehicle that promotes this new model. In OpenStack, for example, there are projects that could probably address nearly all of the plug-and-play-apps requirements with some expansion in their scope. If an open-source project offered all these benefits, commercial vendors would be forced to follow suit or risk losing their customers to software that's not only more flexible, it's free.

Planning for all of this isn't going to be easy though, as the enterprises who can now see this vision as their ultimate application goal already know. You can break down the vision into steps, into requirements for internal IT, cloud services, integration and provisioning tools. The problem is that these pieces still don't know they're part of the grand plug-and-play puzzle, and that means a lot of work integrating elements. Enterprises have little experience doing something like this, and they tell me integrators aren't pitching this angle yet. So while this future may be logical and wonderful, it's still just a hair beyond our grasp. Enterprises agree, but they think that 2013 will be the time when all this begins to appear. I hope they're right.