No Jitter Roll: Google CCAI Update, Latest Meeting Tech
This week we bring you news on an updated contact center AI solution, a line of audio devices for meeting rooms, an integrated compliance recording solution, and automated machine learning for speech recognition.
Google Cloud Updates Contact Center AI
At Google Next OnAir earlier this week, Google Cloud revealed a trio of new capabilities for Contact Center AI (CCAI), its conversational AI solution for contact center partners (read related No Jitter post, “10 Things to Know About Google Contact Center AI” from Antony Passemard, head of product and strategy for conversational AI at Google Cloud).
For Dialogflow, a development suite for building conversational AI, Google Cloud has introduced the latest version, Dialogflow CX. With this update, Google Cloud said it has optimized Dialogflow for use by large contact centers that deal with “complex (multi-turn) conversations” and that are “truly omnichannel.” By that, Google Cloud means you can build the conversational AI once for deployment across any channel. Dialogflow CX, which features a new visual builder, is available in beta, Google Cloud said. (For a deep dive on Dialogflow, read our 12-part “Decoding Dialogflow” series by consultant Brent Kelly, principal at KelCor.)
For Agent Assist, which identifies customer intent, provides guided assistance in real time, and automates call dispositions, Google Cloud has extended beyond voice calls with a new chat module. Additionally, Google Cloud has tuned Agent Assist for use in regulated industries, as well as to surface the latest discount information, deals, and special offers, as described in its blog post.
Lastly, Google Cloud introduced Custom Voice, for use with CCAI as well as the company’s Text-to-Speech API. With Custom Voice, available in beta, brands can create a unique voice that customers will recognize across touchpoints, Google Cloud said.
Yamaha Announces Line of Audio Devices
Yamaha this week unveiled its ADECIA audio line for meeting spaces, designed to be part of a complete conferencing solution that includes its Power-over-Ethernet (PoE) network switches and VXL Series Dante PoE line array speakers. The new audio components are the RM-CG ceiling array microphone and the RM-CR remote conference processor. With the system, enterprises will be able to support human voice detection, noise reduction, speaker tracking, echo cancellation, and multibeam tracking within their meeting spaces, Yamaha said. Additionally, the new ceiling array microphone and remote conference processor are compatible with other Yamaha or third-party components.
The ADECIA products will be available in early 2021; pricing details will follow at a later time, Yamaha said in a briefing about the announcement.
Numonix, Ribbon Partner on Compliance Recording
Recording solutions provider Numonix and real-time network solutions provider Ribbon Communications this week announced they have partnered to deliver Microsoft Teams compliance recording. The interaction recording solution, which combines Numonix’s IXCloud and Ribbon’s Session Border Controller Software Edition (SBC SWe) Lite, allows users to record audio, video, and screen sharing natively and supports Teams Direct Routing, according to the companies. Additionally, Numonix and Ribbon offer their respective products as fully managed software from the Microsoft Azure cloud. Enterprises can store data in one of 15 data centers to support data sovereignty, the companies said.
Deepgram Recognizes AutoML for ASR
Recently recognized as one of seven speech tech vendors “pushing ahead in the enterprise,” automatic speech recognition (ASR) startup Deepgram late last month added an automated machine learning (AutoML) training capability to its portfolio.
With AutoML, data scientists and others implementing speech recognition can reduce the amount of hand-tuning they must undertake with their models, as Deepgram described in a blog post introducing the new training capability. Though AutoML has previously been applied to natural language processing, image, and vision tasks, Deepgram lays claim to being the first to apply it to ASR. AutoML removes the need for manual selection of input audio features, maintenance of custom vocabulary lists, and modification of underlying algorithms or architectures, among other tuning exercises, Deepgram said.
While Deepgram offers its base models for free, AutoML is available with paid accounts only.
Beth Schultz, No Jitter editor, contributed to this article.