
3 Reasons Speech Is Becoming Preferred for Self-Service

For decades, service managers have understood the benefits of deploying self-service solutions for customer care across multiple service channels. What’s not to like? Self-service solutions give businesses a way to reduce the cost of handling support calls, avoid fines and penalties resulting from lack of compliance, and meet the demands of customers who now expect service anytime and anywhere they want it.

Now, as Donna Fluss, president of DMG Consulting, wrote in a blog post earlier this year, “A remarkable thing is happening in the realm of customer service: After years of rejecting self-service, customers are changing their tune. Consumers of all ages are showing a preference for self-service solutions over talking to agents or using chat boxes, provided they do their jobs well.”

The realization that using a natural language speech interface can improve the speed, effectiveness, and experience of the self-service interaction isn’t new, either.

However, a new generation of cloud-based speech technologies, coupled with a new breed of application development tools, is making it easy for service organizations of all sizes to build, package, and deploy self-service apps that harness the latest innovations in speech and natural language processing (NLP).

That’s driving a wave of adoption for virtual agents. Gartner predicts that 25% of customer service and support operations will integrate virtual customer assistant (VCA) or chatbot technology across engagement channels by 2020, up from less than 2% in 2017.

So why are businesses adopting natural language at such a rapid pace? Let’s explore three key drivers.

Lower Cost & Complexity

It’s certainly been possible to build advanced natural language speech interfaces for years. In 2000, for example, speech-enabled IVR applications let American Airlines callers ask to “fly from Austin to Boston, next Wednesday at 8 a.m.” and Charles Schwab investors to “buy 100 shares of IBM at the market price.”
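
To make concrete what such an interface does, here’s a minimal, hypothetical sketch in Python of the slot-filling step that follows recognition: mapping an utterance to an intent and its parameters. The intent names, patterns, and phrases are illustrative inventions, not the grammars those systems actually used.

    import re

    # Hypothetical grammar: each intent pairs a pattern with the slots it fills.
    # Real systems of that era used far larger hand-built grammars and
    # statistical language models, not two regular expressions.
    INTENT_PATTERNS = {
        "book_flight": re.compile(
            r"fly from (?P<origin>\w+) to (?P<destination>\w+),? "
            r"(?P<date>[\w ]+?) at (?P<time>[\d:]+ ?[ap]\.?m\.?)",
            re.I,
        ),
        "buy_stock": re.compile(
            r"buy (?P<quantity>\d+) shares of (?P<symbol>\w+) "
            r"at the (?P<order_type>market|limit) price",
            re.I,
        ),
    }

    def parse_utterance(text):
        """Return (intent, slots) for the first matching rule, or None."""
        for intent, pattern in INTENT_PATTERNS.items():
            match = pattern.search(text)
            if match:
                return intent, match.groupdict()
        return None

    print(parse_utterance("fly from Austin to Boston, next Wednesday at 8 a.m."))
    # ('book_flight', {'origin': 'Austin', 'destination': 'Boston',
    #                  'date': 'next Wednesday', 'time': '8 a.m.'})

Building and maintaining grammars like these at production scale, across thousands of phrasings, is exactly where the time and money went.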

The problem was, these applications took a lot of time and a whole lot of money to build.

First, a company had to buy and host its own speech recognition and text-to-speech servers. Next, it had to hire a team of developers to build the application. Finally, it had to train the recognition servers and tune the system until it reached the necessary level of service. The process could take months and cost close to a million dollars, putting speech out of reach of all but the largest call centers.

Speech and NLP have since taken a now-familiar path to the cloud. Like other technologies before them, they have become cheaper and more accessible to a wider range of businesses. Service teams no longer have to buy and manage their own hardware and software, and can instead pay for usage based on monthly demand.

In addition, application development cycles have shrunk because cloud vendors like Google and IBM have trained their recognition servers using massive datasets they’ve acquired as millions of users interact with their cloud-based speech services. This further reduces cost and complexity.
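
To see how little of that old burden remains, here’s a minimal sketch of transcribing a caller utterance with Google’s Cloud Speech-to-Text Python client. It assumes the google-cloud-speech package is installed and application credentials are configured; the file name and audio settings are illustrative.

    from google.cloud import speech

    client = speech.SpeechClient()

    # A short caller utterance captured by the IVR (hypothetical file).
    with open("caller_utterance.wav", "rb") as audio_file:
        audio = speech.RecognitionAudio(content=audio_file.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=8000,  # typical telephony sample rate
        language_code="en-US",
    )

    # No servers to buy, host, or train: one metered call to a
    # recognizer the vendor has already trained on massive datasets.
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)

What matters in the sketch is what’s absent: the recognition servers, the training data, and the tuning effort all live on the vendor’s side, and the service team simply pays per call.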
