The first AI World Conference and Expo, an event specializing in artificial intelligence, kicked off in San Francisco in early November. This niche conference included demonstrations of language mapping and image tagging technologies, natural language processing applications, and, of course, bots.
Presentations and exhibits were not focused on front-end applications such as speech recognition or connected devices, the topics typically discussed as part of enterprise communications. AI, the technology itself, is more about the back end. There is tremendous synergy between artificial intelligence, speech recognition, and the Internet of Things, but at AI World those technologies were treated as data collection endpoints. Just as our eyes, ears, and voice are not our brain, AI World was about the technology that makes endpoints recognize, understand, speak, and think.
Some of the products demonstrated are still works in progress. AI-driven applications such as bots tend to get the most publicity because they are easy to understand from a simple demonstration. However, a lot of value also comes from less flashy applications focused on cybersecurity, legal liability analysis, navigation, and supply chain improvements.
One session, led by Caroline Gabriel, research director and co-founder of Rethink Technology Research, focused on AI systems being deployed in the enterprise today and included highlights from her comprehensive study of enterprise AI usage. The study included a survey of the industries identified as having the highest interest in deploying AI-enabled technologies, with a sample of fewer than 100 companies. Gabriel identified three major IT trends that enable AI to be used across a wide span of systems and applications: cloud computing, particularly from companies such as Amazon that offer the platform as a service; the Internet of Things (IoT), which will create so many data endpoints over fast networks that analyzing billions of connected objects would be impossible without AI-driven automated analysis; and miniaturization, which puts the technology on a chip small enough to make AI accessible to the masses, even in consumer-level devices.
Some of Gabriel's findings were especially surprising. Take retail businesses and factories, for example: 40% of the retail businesses surveyed plan to deploy augmented or virtual reality using products that are commercially available today. Additionally, 20% of survey respondents were not clear on the AI value proposition, and 60% of enterprises said they lacked the skills to implement AI. However, in most industries 30% to 52% of companies indicated they are planning AI deployments for cybersecurity, virtual reality, speech recognition, or automated navigation by the end of 2017, and up to 80% had similar plans for 2020.
Most commonly, enterprises embark on an AI implementation to improve an internal process, which tends to be followed by a second phase in which the technology is deployed to a customer-facing system, Gabriel said. This seems contrary to my experience with large enterprise projects, where the high-ROI, customer-facing project is usually the business driver and internal process improvement is a secondary beneficiary. Gabriel clarified that while the business driver may indeed be a customer-related project, few companies deploy these technologies to their clientele without a pilot that tests the solution by fixing some internal problem first.
One of the most prevalent trends in AI for the enterprise is personalization, which extends far beyond the ad customization we are all used to seeing on the Internet (e.g., you searched for socks, and now you'll see sock ads for the next 5 ½ years). Big data analytics provides flexibility in identifying patterns and relationships in unstructured data (i.e., information that is not categorized or standardized in data fields). For example, in the expo hall, Verve.ai demonstrated the concept of leveraging personas for marketing and customer relations in a B2B environment, where customer engagement strategies are traditionally based on vertical market and revenue. In other words, if you are a very large customer, you likely already interact with a dedicated account team, but you may also get massive amounts of surveys, emails, and invitations. This redundant engagement effort may add little value, and may even have a negative impact on the client.
By analyzing customer behavior and segmenting by personas (for example: Price Fighters, Range Buyers, Delivery Buyers, Quality Fanatics, and Traditionalists; or in another scheme: Strategic, Engaged, Dormant, Reactive), a marketing and customer service strategy can be tailored to engage each business more effectively than one based on revenue and vertical alone. Personas for individual people are an easy idea to grasp, but it's harder to see how a whole company's persona could be identified. Verve.ai CEO Dalia Asterbadi explained that company personas are based on the individuals representing the decision makers; in most cases, the company persona will mirror that of its personnel. It turns out that, just as with consumer social media, there is a whole range of information that can be gleaned from commercial SaaS platforms to develop the persona.
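To make the idea concrete, here is a minimal sketch of behavior-based persona segmentation using k-means clustering. It is not Verve.ai's method; the feature names, sample data, and cluster count are hypothetical, chosen only to show how behavioral signals might be grouped into personas instead of sorting accounts by revenue and vertical.

```python
# Minimal sketch: clustering accounts into behavioral personas.
# Feature names and data are hypothetical; a production system would
# draw far richer signals from CRM and SaaS platforms.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Each row is one customer account:
# [orders per quarter, avg. discount requested (%), support tickets, email open rate]
accounts = np.array([
    [12,  2, 1, 0.80],   # frequent buyer, full price, engaged
    [ 3, 18, 0, 0.10],   # infrequent, price-driven, quiet
    [ 9,  1, 6, 0.75],   # frequent, quality-conscious, heavy support use
    [ 2, 20, 1, 0.05],
    [11,  3, 2, 0.85],
])

features = StandardScaler().fit_transform(accounts)  # put signals on a common scale
personas = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for row, label in zip(accounts, personas):
    print(f"account {row} -> persona cluster {label}")
```

In practice, a human analyst would profile each resulting cluster and attach a name to it along the lines of Price Fighters or Quality Fanatics.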
The original goals of AI were targeted toward creating machines that could think at a human level. Since so many of the successes in AI have been based on narrow task domains, the term Artificial General Intelligence (AGI) was coined to differentiate from the original AI mission of a fully intelligent machine. Even with this distinction, the definition of what constitutes a true thinking machine varies. The best-known definition is the Turing test, which measures a computer's ability to hold a text-based, unrestricted conversation with a human counterpart without being identified as a machine. There have been some successes on this test, to the extent that harder benchmarks for measuring AGI have been devised (e.g., the Coffee Test, the Robot College Student Test, and the Employment Test).
It can be frustrating that an AI-driven system (bot, speech, or other interface) is limited to a specific set of tasks. However, expert systems focused on a narrow "task domain" are far more productive and accurate than AGI systems. Humans are much more adept at recognizing a change in context than machines.
Intraspexion, an NLP application designed to prevent corporate discrimination or harassment lawsuits, is a good example of narrowly focused, supervised learning being used to solve a specific problem. Case law from workplace discrimination and harassment lawsuits is used as the training data to create an ontology: a very specific set of data types and relationships drawn from that case law. The system scans corporate email for communications that may indicate a lawsuit risk (e.g., "We have enough women in that department," and many more uncomfortable examples). It's important to note that NLP is not just a keyword or synonym search, since the terminology in the case law will differ from the language used in employee communications; the system searches for matching context. This distinction becomes even more important as the technology is applied to more casual forms of communication such as instant messaging.
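As a rough illustration of context matching versus keyword matching, the sketch below scores emails by semantic similarity to a couple of exemplar phrases. It is not Intraspexion's implementation; it assumes the open-source sentence-transformers package, an arbitrary embedding model, and an arbitrary threshold, purely to show how a risky sentence can be flagged without sharing any keywords with the training examples.

```python
# Toy illustration of context matching rather than keyword search.
# Not Intraspexion's implementation; assumes the sentence-transformers
# package and the 'all-MiniLM-L6-v2' model are available.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")

# Exemplar phrases distilled (hypothetically) from discrimination case law.
risk_exemplars = [
    "decisions about hiring were influenced by the candidate's gender",
    "an employee was passed over for promotion because of her age",
]

emails = [
    "We have enough women in that department",          # risky, shares no keywords
    "Please send the Q3 supply chain forecast today",   # benign
]

risk_vecs = model.encode(risk_exemplars)
email_vecs = model.encode(emails)

# Flag an email if it is semantically close to any exemplar.
scores = cosine_similarity(email_vecs, risk_vecs).max(axis=1)
for email, score in zip(emails, scores):
    flag = "REVIEW" if score > 0.4 else "ok"
    print(f"{flag:6s} ({score:.2f})  {email}")
```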
Software company Kimera Systems was also exhibiting at AI World, showcasing Nigel, advertised as the first commercially deployable AGI technology: it is not programmed for any specific functions, but instead determines what functions are required through unsupervised observation of sensor data. In August, Kimera's beta users were instructed to go to the movies with their Nigel-enabled smartphones. With only minimal guidance provided to the beta testers, Nigel observed that a movie theater is a type of location, and that people share common behaviors with respect to their phones when they visit this type of location. Nigel learned to proactively dim screens and silence smartphones when people enter a theater. The test was simple, but Nigel is designed to become more contextually aware as its unsupervised learning algorithm accumulates knowledge. Kimera is in advanced talks with Tier 1 mobile network operators and handset manufacturers to integrate Nigel at the operating system level, and will also deploy Nigel in conjunction with undisclosed app partners in 2017.
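The movie-theater result can be pictured as simple pattern mining over observation logs. The sketch below is not Kimera's algorithm, and the data, categories, and support threshold are invented; it only illustrates how repeated (location type, action) pairs could surface a candidate automation rule.

```python
# Simplified illustration (not Kimera's algorithm): mining repeated
# (location type, user action) pairs from observation logs and proposing
# an automation once a pattern is frequent enough.
from collections import Counter

# Hypothetical sensor/usage observations gathered from a phone.
observations = [
    ("movie_theater", "silence_ringer"),
    ("movie_theater", "dim_screen"),
    ("movie_theater", "silence_ringer"),
    ("office", "silence_ringer"),
    ("movie_theater", "silence_ringer"),
    ("movie_theater", "dim_screen"),
    ("home", "raise_volume"),
]

pair_counts = Counter(observations)
location_totals = Counter(loc for loc, _ in observations)

MIN_SUPPORT = 0.4   # action must occur on at least 40% of visits to that location type
MIN_VISITS = 3      # and the location type must have been seen at least 3 times

for (location, action), n in pair_counts.items():
    support = n / location_totals[location]
    if support >= MIN_SUPPORT and location_totals[location] >= MIN_VISITS:
        print(f"Proposed rule: when entering a {location}, automatically {action} "
              f"(seen in {support:.0%} of {location_totals[location]} visits)")
```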
In the enterprise communications industry, there is a lot of optimism about the disruptive potential of unsupervised learning and AGI. Even if a true thinking machine is far in the future, the more contextually aware the back-end technology becomes, the better it will serve the enterprise through automated assistants and bots that can answer more than basic questions.
You have to wonder whether some companies could get the same competitive edge just by staffing up, or whether the ideal scenario is using AI to make sure a real human has all the information needed to serve a customer. Maybe we will still have jobs for a little while longer after all.