What Will Our Machines Talk About?
Suppose that instead of "my people calling your people," our machines called each other. What would they talk about? In my last blog, I talked about the possible impact of automation on human collaboration, and how machine collaboration might evolve and replace human conversations. If that did happen, what might our machines be talking about, and how much should we expect to listen in on their conversations, just to be safe?
The vision of robots running amok has figured in movies for decades, and it's hard to talk about autonomous machines without running into that frightening outcome. It's not as silly a notion as it seems, either, even though we're a long way from that kind of machine intelligence. The problem with the increased use of autonomous, closed-loop systems, the kind we now think of as the ultimate form of artificial intelligence, is that even the early steps can be scary. Self-driving cars could self-crash, and network intelligence could end up creating an outcome where "fair" distribution of capacity means nobody gets any of it.
I remember a '60s-era magazine cartoon that showed a programmer coming home, tossing the briefcase on the sofa, and telling the spouse, "I made a mistake today that would have taken a thousand mathematicians a hundred years to make!" That's the frightening part of automation; closed-loop systems can do the wrong thing at lightning speed, faster than we could counter it. That's why we have to worry about what our future machines might say when they talk.
Closed-loop systems couple an event to a response, as opposed to open-loop systems, in which a human interprets conditions (events) and activates scripts or processes (responses). Closed-loop systems have two basic requirements, even if they're not always explicitly acknowledged. The first is that conditions must be monitored as events drive responses, to ensure that things aren't getting worse instead of better. The second is that there must be a set of conditions that, if recognized, activates a human review of the automated processes. These two requirements form the future of collaboration.
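A minimal sketch of those two requirements, assuming an invented controller class (none of these names come from a real product), might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class ClosedLoop:
    """Couples events directly to responses -- no human in the loop."""
    escalation_conditions: set = field(default_factory=set)
    escalated: bool = False

    def respond(self, event: str) -> str:
        if event in self.escalation_conditions:
            self.escalated = True            # requirement 2: hand off to humans
            return "pause-for-human-review"
        return f"automated-response-to-{event}"

    def monitor(self, disorder_before: float, disorder_after: float) -> None:
        """Requirement 1: verify each response actually improved conditions."""
        if disorder_after > disorder_before:  # things got worse, not better
            self.escalated = True

loop = ClosedLoop(escalation_conditions={"sensor-conflict"})
print(loop.respond("link-congestion"))   # handled automatically
print(loop.respond("sensor-conflict"))   # recognized condition: human review
```

The point of the sketch is that escalation is a first-class outcome, not an error path: the loop is designed from the start to know when to stop being autonomous.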
Both these things have to spring from a broader notion of what a closed-loop system really is. It's not just moving the door to the right place on the assembly line as the car body moves by, it's also signaling when somebody is in the way or when something holds up the line. True feedback loops, the basis for closed-loop automation, should provide both a set of policies on what's "progress" versus "disorder" and a means of assessing conditions that doesn't rely on the same set of sensors or analyses that initiated the closed-loop process to begin with. If things look confusing, a closed-loop system has to call a timeout.
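The "independent assessment plus timeout" idea can be sketched the same way: the loop compares its own view of progress against a second view drawn from different sensors, and suspends itself when the two disagree. All function names here are invented for illustration:

```python
def primary_view() -> str:
    # The view from the sensors that drove the closed-loop action.
    return "line-moving"

def independent_view() -> str:
    # A separate assessment that does NOT reuse the same sensors/analyses.
    return "line-stalled"

def assess(progress_policies: set = frozenset({"line-moving"})) -> str:
    """Apply 'progress vs. disorder' policies; call a timeout on confusion."""
    a, b = primary_view(), independent_view()
    if a != b:
        return "timeout"   # conflicting views: suspend and escalate
    return "progress" if a in progress_policies else "disorder"

print(assess())   # the two views disagree, so the loop calls a timeout
```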
Our automated system is really a contextual framework, a set of policies that decide what's "happening" based on interpretation of "conditions" the system senses. Human conversations are also contextual, of course. "Yes" is a good answer only if you know the question. Sensor information, location information, time of day, speed of motion, and all this other stuff is just data if it can't be put into a contextual framework. Machine collaboration, like human collaboration, has to be cooperation in a shared mission, and the framework sets that mission, not the machines themselves.
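The "yes is only a good answer if you know the question" point can be made concrete: the same raw reading means different things under different contextual frameworks. The field names and thresholds below are assumptions for illustration:

```python
def interpret(reading: dict, context: dict) -> str:
    """Turn raw data into a 'happening' via the framework's policies."""
    speed = reading["speed_mph"]
    expected = context["expected_speed_mph"]
    return "normal" if speed <= expected else "anomaly"

reading = {"speed_mph": 55}
print(interpret(reading, {"expected_speed_mph": 65}))  # highway context: normal
print(interpret(reading, {"expected_speed_mph": 25}))  # school-zone context: anomaly
```

Identical data, opposite conclusions; the context, not the sensor, decides what is "happening."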
The big advance in collaboration, the whole framework for collaboration in an automated system, is the sharing and updating of that contextual framework. Every worker in a business has an independent context, and in traditional "collaboration" what comes together are those contexts, not the workers. If we move to an automated system, it's not really our machines that are talking at all, with each other or with us; it's the contexts in which they operate. Collaborative communications thus links contextual systems, which means we need to think about what linking such systems takes, even when no humans are involved.
What's the Intent?
A closed-loop system, which is what a contextual framework is in the abstract, can be represented by what's called an "intent model." The model presents external behaviors and hides what's within, including how those behaviors are derived. A model recognizes events and also generates them to other models, and each event is a message that might be a request, a response, or a notification. Machine collaboration is all about event exchange.
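A toy intent model along those lines, with the class and event names invented here rather than drawn from any standard, could look like this:

```python
from enum import Enum

class EventKind(Enum):
    REQUEST = "request"
    RESPONSE = "response"
    NOTIFICATION = "notification"

class IntentModel:
    """Exposes external behaviors; hides how they are derived."""
    def __init__(self, name: str):
        self.name = name
        self._handlers = {}   # internal derivation, invisible to peers

    def on(self, event_name, handler):
        self._handlers[event_name] = handler

    def receive(self, kind: EventKind, event_name, payload=None):
        handler = self._handlers.get(event_name)
        if handler is None:
            return (EventKind.NOTIFICATION, f"{self.name}: unknown event")
        return (EventKind.RESPONSE, handler(payload))

# Two models collaborate purely through event exchange:
network = IntentModel("network")
network.on("get-capacity", lambda _: "40Gbps available")
kind, body = network.receive(EventKind.REQUEST, "get-capacity")
print(kind.value, body)
```

Note that the caller never sees `_handlers`; it only sees the request/response/notification surface, which is the whole point of an intent model.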
Machine communications also isn't really, or at least not directly, about machines. In nearly all cases, the events that drive our automated systems will be created by processes that correlate and analyze conditions, not by sensors and gadgets. You'd bury automated systems in garbage if you made them interact with every IoT sensor just to find what they need, and the system "next door" would waste time doing the same thing. Events will be pre-processed into a kind of digested insight form, and these insights will be spread around.
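A sketch of that pre-processing step, with the threshold, field names, and correlation rule all invented for illustration:

```python
def digest(readings: list, threshold: float = 0.8):
    """Correlate many raw sensor readings into one insight event, or none."""
    overloaded = [r for r in readings if r["load"] > threshold]
    if len(overloaded) >= 2:   # one hot sensor may be noise; two is a pattern
        return {"insight": "congestion",
                "links": [r["link"] for r in overloaded]}
    return None                 # nothing worth telling the automated systems

raw = [{"link": "A", "load": 0.9},
       {"link": "B", "load": 0.85},
       {"link": "C", "load": 0.2}]
print(digest(raw))   # one digested event instead of three raw readings
```

Downstream systems subscribe to the digested insight, never to the raw sensor firehose.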
Spread even to workers. Human collaboration, with other humans and with machines, will end up taking the form of making humans into intent models, with specific events they're receptive to and specific missions they're supposed to be supporting. Mobile devices and desktop computers will be windows through which the workers collaborate with the entire, complex, man-machine future. Even human collaboration will be mediated through event-filtering, context-aware agent processes.
Don't be worried about machines talking about you behind your back. In no time, we'll be speaking the same language they are.
Learn more about artificial intelligence at Enterprise Connect 2018, March 12 to 15, in Orlando, Fla. Register now using the code NOJITTER to save an additional $200 off the Advance Rate or get a free Expo Plus pass.
Follow Tom Nolle on Google+!