In my last blog, I talked about how collaboration depends not on communication but on context, and postulated that the problem with UC/UCC is that we're taking a tool-driven rather than a value-driven approach to helping workers do their jobs better. It's fair to ask whether traditional communications vendors should now fall on their swords, given their linkage to a wrong approach, or whether there's something constructive they could do. Well, maybe there is.
We could describe a truly useful collaborative experience as a sequence of interactions built around a shared context. "Do you see this?" followed by the partner "seeing" it, followed by a response like "Oh, that's the wrong widget, you need to look more to the right." This illustrates how communication could be integrated with point-of-activity empowerment of workers. The question is whether we can make it happen technically, which has to start by asking what "context" is from a technical perspective.
Strictly defined, context is "the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed." To share those circumstances with another, you'd have to reconstruct at least the relevant aspects of those "circumstances" and recreate enough of the "setting" so that the collaborating parties could cooperate effectively. So what are those relevant aspects? There are three of them.
First, we'd have to understand the information context of the worker. Decades ago, I used the term "jobspace" to refer to the sum of information that surrounds a worker in support of a particular activity. Jobspace is created by applications, and it's primarily a view of data or the result of a process. There's nothing about jobspace that a UC/UCC vendor would contribute.
The second thing is visual context, meaning what the worker is seeing. Sight is by far the dominant sense, offering the richest interpretation of our physical surroundings. We have to reflect sight in collaborative context, but remember that our goal is to define the setting of the worker, which means we have to combine sight with location, orientation, etc. Saying the worker sees a wall isn't very helpful, but saying they see a wall from this specific position, facing north, likely tells you what the wall is.
The third thing is mission context. What is the worker doing, or trying to do? You can't help someone if you don't understand their goal. This context is particularly important, or should be, for UC/UCC vendors aspiring to be context-relevant, because mission context is easily conveyed through communication, yet much more difficult to interpret at the machine level.
OK, let's assume that we have two people trying to collaborate and that we accept these relevant aspects of a setting as defining the context they'd like to share. How do they share it?
We live in an age of "orchestration", meaning that many of our revolutionary new concepts, including cloud deployment and SDN, involve marshalling resources to a task based on some kind of model of the desired outcome. A collaborative model would define the worker's three relevant context aspects in terms of a collection of software actions. Want information context? You replicate the query the worker generated, or you share the worker's screen. Visual context? You look at what the camera on the worker's phone or tablet is showing. Mission context? You could view a work order if there was one, but you could also open a connection so you could ask the worker what they're trying to do. The point of a collaborative model would be to set up all this context automatically when collaboration was needed, rather than expect the worker to orchestrate it on their own.
If you want to talk to somebody you send them an IM or call them. If you want to collaborate with them, might you not send them a collaborative model? Think of it as a kind of orchestration script that sets up connections (audio, video), calls up information, reads location and orientation and maps it...you get the picture. It might be represented visually as a popup on a screen the same way an incoming call would be represented, but if it's "taken," i.e., accepted by the called party, then the orchestration script runs and establishes the collaborative model for the called party, so that party is now sharing context with the caller.
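The invite-and-accept flow described above can be sketched as well. Again, this is an illustrative assumption, not a real protocol: the message format, field names, and helper functions are invented for the example, which simply shows the model travelling like an IM payload and its orchestration script running only when the called party accepts.

```python
import json

# Hypothetical sketch: a collaboration invite travels like an IM;
# acceptance triggers execution of the embedded orchestration script.

def make_invite(caller: str, model: dict) -> str:
    """Serialize a collaboration invite as an IM-like message."""
    return json.dumps({"type": "collab-invite", "from": caller, "model": model})

def on_accept(invite: str, run_action) -> list[str]:
    """The called party 'took' the invite: run each setup step in the
    model so that party now shares context with the caller."""
    msg = json.loads(invite)
    executed = []
    for step in msg["model"]["steps"]:
        run_action(step)   # e.g. open audio, start video, map the location
        executed.append(step)
    return executed
```

A usage example: `on_accept(make_invite("field-tech", {"steps": ["open-audio", "open-video", "map-location"]}), dispatch_fn)` would run the three setup steps in order on the called party's side, where `dispatch_fn` is whatever actually opens connections and calls up information.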
If you're a UC/UCC vendor or even a provider of UCaaS, you can't expect to provide all of the applications, location-based services (LBS) elements, devices, databases, and so forth that might be involved in our shared context; but you could provide for the exchange and execution of collaborative models. It's not a major step from sending an IM to sending a data element representing collaborative context orchestration so that the recipient can execute it and become fully and functionally linked with the initiating worker. Furthermore, traditional calling (voice and video), whiteboard, and even presence management could be made into subsets of this, so the orchestrated-context model could overlay current communications services.
Many of the major UC/UCC vendors had a lock on business voice communications, and they'd like a lock on UC/UCC. Given the real nature of collaboration, such a lock would have to come not by doing everything that collaborating workers do, but by orchestrating the context of the job for the workers involved in the collaboration. This is a logical, manageable task, and it may represent the best hope for relevance for the major players.
Follow Tom Nolle on Google+!