
Deep Thinking about AI in Communications

Artificial Intelligence (AI) is one of the latest technologies seen as a way to make communications better. The rising interest in bots, personal digital assistants -- Siri (Apple), Cortana (Microsoft), Alexa (Amazon), Monica (Cisco), Assistant (Google) -- sentiment analysis, and IBM Watson-based real-time advice and task handling portends a future in which computer-facilitated communication becomes an ever more pervasive part of our daily lives and personal workflow routines. But what is AI, and does it really differ from other types of programming?

I am delighted that Eric Krapf, GM of Enterprise Connect, has invited me to co-moderate a session on AI and its emerging importance in communications at Enterprise Connect 2017 in Orlando this March. In this Summit on the Lawn session, "Cognitive & Contextual -- Can AI Disrupt Enterprise Collaboration?," we will discuss this topic and its potential impact with those in the industry who are making this technology a reality.

Thin Dividing Line
So, what makes a software program artificially intelligent anyway, as compared to a non-intelligent program? Well, the truth of the matter is that even AI experts have difficulty clearly articulating this difference. In his book Artificial Intelligence in BASIC, author Mike James states, "There is a very thin dividing line between clever programming and artificial intelligence. Indeed, it is possible that there is no such thing as an intelligent program -- just clever programs that become increasingly clever."

Some go on to use the Turing test as a way to distinguish intelligent programs from their non-intelligent cousins. According to this test, a program is considered intelligent if it "exhibits intelligent behavior equivalent to, or indistinguishable from, that of a human." Numerous authors have debated the validity of Turing's test as an indicator of machine intelligence.

From Experience Comes Wisdom
My own take comes from having created a sophisticated "AI" program as part of my Ph.D. dissertation. In that work, I focused on knowledge-based systems that could recommend which mathematical simulations could best represent the physical properties (density, vapor pressure, triple point, etc.) of hydrocarbon mixtures at various temperatures, pressures, and compositions.

This system took input much as a chat bot does today, accepting natural language in typed form. It would parse the input strings, figure out which components were in the hydrocarbon mixture and at what compositions, and then apply a complicated set of heuristic rules to choose one or more potentially applicable simulation models.

The system assigned a confidence level to each model so that researchers could know which of them had the highest probability of estimating these physical properties accurately. It also explained why a particular model should work with a given mixture and, just as important, why certain other models would not work well.

An example of how an AI system can explain how it reached its conclusions
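To make that concrete, here is a minimal sketch, in Python, of how a handful of heuristic rules can score candidate models, attach a confidence level, and carry a human-readable reason for each recommendation. The rules, model names, and numbers are invented for illustration; this is not the actual dissertation system.

    # Minimal rule engine: each rule looks at the mixture and, if it applies,
    # returns (model, confidence, reason). Rules and numbers are hypothetical.

    def rule_light_hydrocarbons(mixture):
        if all(c in ("methane", "ethane", "propane") for c in mixture["components"]):
            return ("Peng-Robinson equation of state", 0.85,
                    "all components are light hydrocarbons this model handles well")
        return None

    def rule_high_pressure(mixture):
        if mixture["pressure_bar"] > 300:
            return ("ideal-gas model", 0.05,
                    "pressure is far above the range where an ideal-gas assumption holds")
        return None

    RULES = [rule_light_hydrocarbons, rule_high_pressure]

    def recommend(mixture):
        hits = []
        for rule in RULES:
            result = rule(mixture)
            if result is not None:
                hits.append(result)
        # Highest-confidence models first; low scores double as "why not" answers.
        return sorted(hits, key=lambda hit: hit[1], reverse=True)

    mixture = {"components": ["methane", "ethane"], "pressure_bar": 40.0}
    for model, confidence, reason in recommend(mixture):
        print(f"{model}: confidence {confidence:.2f} because {reason}")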

It then crafted a specially formatted input file for a sophisticated numerical simulator, which executed one or more of the appropriate mathematical models. The simulator would return its results to the knowledge-based system, which would then display the fluid's properties graphically on a high-resolution display.

Sample hydrocarbon mixture physical property estimation based on models selected by an AI system
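The hand-off to the simulator was essentially mechanical: write a formatted input file, run the simulator, read the results back for display. A rough sketch of that loop might look like the following, where the input-file format, the simulator command name, and the CSV output are all assumptions made purely for illustration.

    import csv
    import subprocess

    def run_simulation(model_name, mixture, out_path="results.csv"):
        # Write a simple keyword-style input deck; the format and the simulator
        # command name below are placeholders, not a real package's interface.
        with open("sim.in", "w") as f:
            f.write(f"MODEL {model_name}\n")
            f.write(f"TEMPERATURE_K {mixture['temperature_k']}\n")
            f.write(f"PRESSURE_BAR {mixture['pressure_bar']}\n")
            for component, fraction in mixture["composition"].items():
                f.write(f"COMPONENT {component} {fraction}\n")

        # Run the external numerical simulator, then read back its results
        # (assumed here to be a CSV of property estimates) for plotting.
        subprocess.run(["property_simulator", "sim.in", out_path], check=True)
        with open(out_path) as f:
            return list(csv.DictReader(f))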

My second foray into AI was the creation of an expert system for Schlumberger's oil field services group. Schlumberger used this system for designing cement slurries needed to hold oil well metal casing in place. Here the input was geological formation data, including temperatures, pressures, the presence of salt zones, gas zones, etc. The output was a list of chemicals to add to these cement slurries so they would best hold the casing in place while keeping fluids from different geological zones from mixing.

This system used 14 different knowledge bases along with a localized relational database. The system could "learn" by applying local preferences and compositions, producing better and better results over time. It also had a "why" mechanism for explaining how it arrived at the recommended formulations.
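Conceptually, neither the learning nor the "why" mechanism needs to be exotic. The toy sketch below shows the idea: each rule carries its own justification, and field feedback nudges rule weights to reflect local experience. The rules, additives, and numbers are illustrative only, not Schlumberger's actual system.

    # Toy additive recommender: each rule proposes a cement additive with a
    # weight and a justification; field feedback nudges the weights over time.

    rules = [
        {"applies": lambda f: f["salt_zone"],
         "additive": "salt-tolerant retarder", "weight": 0.7,
         "why": "salt zones interfere with normal cement setting"},
        {"applies": lambda f: f["temperature_c"] > 120,
         "additive": "high-temperature stabilizer", "weight": 0.8,
         "why": "slurries degrade at high downhole temperatures without a stabilizer"},
    ]

    def recommend(formation):
        picks = [r for r in rules if r["applies"](formation)]
        for r in sorted(picks, key=lambda r: r["weight"], reverse=True):
            print(f"Add {r['additive']} (weight {r['weight']:.2f}) -- why: {r['why']}")
        return picks

    def record_outcome(picks, success):
        # "Learning" here is just adjusting weights to reflect local experience.
        for r in picks:
            r["weight"] = min(1.0, max(0.0, r["weight"] + (0.05 if success else -0.05)))

    picks = recommend({"salt_zone": True, "temperature_c": 95})
    record_outcome(picks, success=True)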

The preceding two examples show AI systems comprising specially formulated knowledge bases, parsing engines, relational databases, and simulators. When you break down the way an AI system works into functional groupings, you might look at it and say, "That's not AI, it's just clever programming."

"Siri, Let's Use You as an Example"
Let's look at how a voice bot like Siri works, since some people consider Siri to be AI. We will examine Siri using "functional dissection."

At a high level, Siri works as follows: We ask Siri a question. Siri tries to figure out what we said, decomposes it into words, does some kind of data look-up, and returns either a spoken response or a data response. Here's what happens under the covers, at a very high level (a code sketch of the same pipeline follows the list):

1. Siri accepts a text or voice request.

2. The text then goes through a natural language processor to tag parts of speech (nouns, verbs, etc.), dependencies, and so forth. The result is called "parsed text."

3. The parsed text is then further processed to determine the person's intent along with any commands or actions mentioned. This step is facilitated by statistical data about common queries and by trying to match the user's request to a known query pattern.

4. Using the intent and/or the commands/actions, Siri mashes up the data with third-party Web services or with the device's specific services. For example, a question about the weather goes out to a Web service, while a request to set a reminder is handled by an app on the device itself.

5. Once Siri has a response from the device or Web service, it translates the response back to natural language text.

6. Siri then converts the natural language text to synthesized speech along with any accompanying results.
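Here is the pipeline sketch promised above, in Python. Every function is a stand-in for a far more sophisticated service (speech recognition, statistical NLP, intent matching, Web APIs, speech synthesis), and the names and data are invented examples rather than Apple's implementation.

    # Sketch of the six steps above; every function body is a placeholder.

    def speech_to_text(audio):                           # step 1
        return "what is the weather in orlando"          # placeholder transcription

    def parse(text):                                     # step 2
        return {"tokens": text.split()}                  # real NLP also tags parts of speech

    def determine_intent(parsed):                        # step 3
        if "weather" in parsed["tokens"]:
            return {"intent": "get_weather", "city": parsed["tokens"][-1]}
        return {"intent": "unknown"}

    def call_service(intent):                            # step 4
        if intent["intent"] == "get_weather":
            return {"city": intent["city"], "forecast": "sunny", "high_f": 82}
        return {}

    def to_natural_language(result):                     # step 5
        return f"It will be {result['forecast']} in {result['city'].title()}, with a high of {result['high_f']}."

    def text_to_speech(text):                            # step 6
        print("(synthesized speech) " + text)

    text_to_speech(to_natural_language(call_service(determine_intent(parse(speech_to_text(b"..."))))))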

Unlike some bot programs, Siri can actually learn, in the sense that over time it better "understands" a particular person's speech, and it can recognize the types of information a particular user may request most often.

So, after breaking down how a voice bot works, is it AI or clever programming? It seems to me that once an AI application is dissected, it becomes less about machine intelligence and more about good programming, databases, and third-party application interfaces.

Along Comes Watson
Perhaps a more complex example of an AI system is IBM's Watson. This is the system Cisco hopes to integrate with Spark to enable "real-time advice and handling tasks." In the article titled "The AI Behind Watson – The Technical Article," members of the Watson team describe how they programmed Watson for the Jeopardy TV game show. They used a concept called DeepQA, which is a "massively parallel probabilistic evidence-based architecture" used for "analyzing natural language, identifying sources, finding and generating hypotheses, finding and scoring evidence, and merging and ranking hypotheses."

The IBM Watson architecture used in the Jeopardy game show (source: IBM)
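Stripped of its massive scale, the DeepQA flow can be outlined in a few lines: generate candidate hypotheses, score each against retrieved evidence, then merge and rank. The sketch below is a conceptual outline only, with placeholder candidates, scores, and clue text; it is not IBM's implementation.

    # Conceptual outline only: generate candidates, score evidence, rank answers.

    def generate_hypotheses(clue):
        # DeepQA runs many parallel search and candidate-generation strategies.
        return ["Toronto", "Chicago", "Orlando"]

    def score_evidence(clue, hypothesis):
        # DeepQA applies hundreds of scorers (passage support, type match, etc.).
        scores = [0.6, 0.8] if hypothesis == "Chicago" else [0.2, 0.3]
        return sum(scores) / len(scores)

    def answer(clue):
        scored = [(h, score_evidence(clue, h)) for h in generate_hypotheses(clue)]
        best, confidence = max(scored, key=lambda pair: pair[1])
        return best, confidence

    print(answer("placeholder clue text"))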

Watson was amazing because it beat two human Jeopardy champions. IBM is now applying Watson's smarts -- natural language processing; deep question analysis capabilities; inference engine; scoring and ranking mechanism; and its ability to merge hypotheses into a response -- to a number of different situations in healthcare, finance, R&D, travel, retail, and security.

Communicating More Intelligently
So, what can Watson, Siri, and other AI-based applications do in the communications world? We only need to use our imaginations to find interesting ways to use these technologies.

Although bots are not new, they seem to be emerging among communications vendors as the technology du jour for letting people query intelligent systems and get answers. AI technology can also significantly enhance sentiment analysis and our ability to gauge understanding. Today, we use speech -- words, inflections, tone, volume -- to get some sense of how a customer is feeling. Imagine a world in which an AI machine can participate in a business meeting, pick out the most salient discussion points, automatically summarize them, put assignments into a task database, and, most importantly, do facial analysis to get a sense of team members' understanding, buy-in, and commitment! These last three are precisely what video is used for so extensively in the enterprise space. Perhaps a "video bot" can help us ascertain how well a meeting really went!
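As a small illustration of the speech-based sentiment idea, a naive scorer might combine word choice with a vocal cue such as relative volume. Real systems use trained acoustic and language models; the word lists, weights, and threshold below are invented purely for illustration.

    # Naive sentiment score combining word choice with a vocal cue (volume).

    POSITIVE = {"great", "thanks", "perfect", "resolved"}
    NEGATIVE = {"frustrated", "cancel", "unacceptable", "waiting"}

    def sentiment(transcript, relative_volume):
        words = transcript.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if relative_volume > 1.5 and score < 0:
            score -= 1   # raised voice plus negative wording: likely an upset caller
        return "negative" if score < 0 else "positive" if score > 0 else "neutral"

    print(sentiment("I have been waiting an hour and this is unacceptable", relative_volume=1.8))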

My own experience with AI and automation suggests that artificial intelligence will not replace people; it simply makes people more efficient at the work they do. I'm looking forward to new AI-based developments for communications and collaboration, and believe that when used properly, these advanced technologies can really help us do our work better and faster.

As to the question of whether it is really AI or clever programming -- well, you'll have to draw your own conclusions. And, don't forget to join Eric and me at Enterprise Connect 2017, where we'll be doing a deep dive on the topic of AI in communications.

Learn more about AI and communications at Enterprise Connect 2017, March 27 to 30, in Orlando, Fla. View the Conference Overview, and register now using the code NOJITTER to receive $300 off an Entire Event pass or a free Expo Plus pass.