Mike Bergelson | July 25, 2010

 
   

The State of Transcription for UC: Part 2.3: Areas of Innovation


Medical transcriptions and closed captioning are likely areas for growing benefits as technology and human processes improve.

This is a continuation of my blog post from last week, part of a series of posts on the application of transcription in unified communications.

In this and the previous two posts, I discuss the state of transcription today. In the final post in the series, I'll address where I believe the market may be going and some key areas of innovation that can help us derive more benefit from recorded audio and video content.

Medical Transcription
The medical transcription market is enormous--around $20–25B globally--and it is expected to grow 15–20% per year for the foreseeable future.

As most people know (or could guess), the traditional model of live-agent transcription is slowly giving way to a semi-automated approach in which system-generated transcriptions are edited by humans. Research suggests that agent productivity increases 30–50% (some vendors claim that productivity doubles) when an automatic speech recognition (ASR) engine takes a first pass, further underscoring the economic benefit of this approach.
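As a rough illustration of that workflow (the function names below are hypothetical stand-ins, not any vendor's API), the first pass comes from an ASR engine and the human only corrects the draft; the back-of-envelope arithmetic shows how an editing rate modestly above the from-scratch typing rate yields gains in the 30–50% range:

    # Hypothetical sketch of the semi-automated workflow: ASR draft, then human edit.
    # asr_first_pass and human_edit are placeholders, not a specific vendor's API.

    def asr_first_pass(audio_path: str) -> str:
        """Stand-in for a speech engine that returns a draft transcript."""
        return "patient presents with acute sinusitis and low grade fever"

    def human_edit(draft: str) -> str:
        """Stand-in for an agent correcting the draft instead of typing from scratch."""
        return draft.replace("low grade", "low-grade")

    def transcribe(audio_path: str) -> str:
        return human_edit(asr_first_pass(audio_path))

    # Back-of-envelope productivity math: an agent who types 60 report lines per hour
    # from scratch but edits an ASR draft at 85 lines per hour is roughly 42% more
    # productive, in line with the 30-50% gains cited above.
    from_scratch_rate, editing_rate = 60, 85
    print(f"productivity gain: {editing_rate / from_scratch_rate - 1:.0%}")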

Clinicians are also using speaker-dependent ASR engines to obviate the need for outsourcing altogether, although the growth of this approach has been relatively slow as clinicians perceive a high set-up cost (both the user and language models must be trained) and may not be willing to commit to the necessary behavior changes.

To wit, a Nuance Communications employee concedes that it is "often more important to train users how to use speech than it is to train speech systems how to recognize users" in his thorough response to a thought-provoking blog post by Robert Fortner cleverly titled Rest in Peas: The Unrecognized Death of Speech Recognition.

In large part, the dollars attached to creating efficiencies in the medical transcription market will inure to the benefit of UC transcription solutions since cost, turn-around time, accuracy and privacy are all major issues for medical transcriptions. Examples of applicable innovations include passive speaker-dependent language model training, the use of multiple speech engines to increase accuracy and improved workflow for human editing.
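To picture the multi-engine idea, here is a toy, ROVER-style word-level vote across several engine outputs (the hypotheses are invented, and real systems align the outputs before voting):

    # Toy example of combining multiple ASR engines by word-level voting (ROVER-style).
    # The hypotheses are invented; real systems align outputs before voting.
    from collections import Counter
    from itertools import zip_longest

    hypotheses = [
        "the patient was prescribed amoxicillin twice daily",
        "the patient is prescribed amoxicillin twice daily",
        "the patient was prescribed amoxicillin twice a day",
    ]

    def vote(hyps):
        """Keep the most common word at each position across engine outputs."""
        columns = zip_longest(*(h.split() for h in hyps), fillvalue="")
        winners = (Counter(col).most_common(1)[0][0] for col in columns)
        return " ".join(w for w in winners if w)

    print(vote(hypotheses))  # -> "the patient was prescribed amoxicillin twice daily"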

Closed Captioning
Recorded video is quickly becoming a common medium for intra- (e.g., training) and inter-enterprise (e.g., marketing) communications. The same benefits--speed of consumption, improved retention, searchability, etc.--that users experience with transcriptions for recordings of live events can be found with "canned" or made-for-video content.

There isn't much demand for real-time transcription in the enterprise context (though there are use cases around company-wide meetings requiring real-time translation), but the approaches used for live captioning help guide some thinking that I'll revisit in my final post in this series.

Traditionally, real-time closed captioning is created in a two-part process. First, the dialog is converted by a stenographer (think of the person who's asked to read back the testimony in your favorite courtroom drama) into a phonetic representation of what's been said. Stenographers and closed captioners routinely record dialog at approximately 200 words per minute (as one would expect, given that this is the upper bound on typical speech rates).

The output of the stenotype machine is then fed into a system that converts the phonemes into actual words. Inaccuracies in this process account for the odd words that we see in the closed captioning ribbons from time to time on TV screens in public places (or at home if we rely on closed captioning).
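A toy version of that phoneme-to-word step (the phonetic keys and candidate lists below are invented, and vastly simpler than a real stenographic theory) shows where those odd words come from: several words share one phonetic form, and without context the converter has to guess.

    # Toy phoneme-to-word conversion, illustrating the homophone ambiguity behind
    # odd caption words. The phonetic keys are invented and far simpler than a
    # real stenographic theory.
    PHONETIC_DICTIONARY = {
        "THAIR": ["their", "there", "they're"],
        "KORT":  ["court", "caught"],
        "DAYT":  ["date"],
    }

    def convert(strokes):
        # Naively take the first candidate for each stroke; without language-model
        # context, homophones are easy to get wrong.
        return " ".join(PHONETIC_DICTIONARY.get(s, [s.lower()])[0] for s in strokes)

    print(convert(["THAIR", "KORT", "DAYT"]))  # -> "their court date" (a lucky guess)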

In some cases, as with the BBC, agents with crisp enunciation actually re-speak what's being said into stenomasks (specially designed masks, with a microphone embedded inside, that cover one's mouth to block outside noise). This parallel dictation is then fed into a speaker-dependent ASR engine to produce a near-real-time transcription with high accuracy.

Interestingly, closed captioning in the US and UK appears to be used most often (by a factor of four to one!) by viewers for whom English is a second language rather than by the intended audience--those with hearing impairments. As we start to transcribe video and audio in the enterprise context, I believe we can count on similar examples of "unintended benefits."

Google made big news in the closed captioning world (as it did with voicemail transcription) by announcing an Automatic Caption Feature for YouTube videos in November 2009. In a clever (or just honest) move, much of the messaging around this feature anticipates the inaccuracies of the machine-generated transcription and re-focuses attention on the important benefit of making video content accessible to hearing-impaired viewers around the world.

The clever use of speaker-dependent speech engines and crowd-sourcing create some interesting possibilities in other areas, as we'll explore in my next post.




