Mike Bergelson | July 25, 2010

 
   

The State of Transcription for UC: Part 2.3: Areas of Innovation


Medical transcription and closed captioning are likely areas of growing benefit as technology and human processes improve.

This is a continuation of my blog post from last week, part of a series of posts on the application of transcription in unified communications.

In this and the previous two posts, I discuss the state of transcription today. In the final post in the series, I'll address where I believe the market may be going and some key areas of innovation that can help us derive more benefit from recorded audio and video content.

Medical Transcription
The medical transcription market is enormous: around $20–25B globally, and it is expected to grow 15–20% per year for the foreseeable future.

As most people know (or could guess), the traditional model of live-agent transcription is slowly giving way to a semi-automated approach in which system-generated transcriptions are edited by humans. Research suggests that agent productivity increases 30–50% (some vendors claim it doubles) when an automatic speech recognition (ASR) engine takes the first pass, further underscoring the economic benefit of this approach.
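To make the economics concrete, here's a back-of-envelope sketch in Python. Every number below is an assumption I've picked for illustration, not vendor data; the point is simply that reviewing a machine draft and retyping only the errors beats typing every word from scratch.

```python
# Back-of-envelope model of the post-editing productivity gain.
# Every number below is an illustrative assumption, not vendor data.

AUDIO_MINUTES = 60     # one hour of dictation
SPEECH_WPM = 150       # typical speaking rate (words per minute)
TYPE_WPM = 80          # agent transcribing from scratch
EDIT_WPM = 150         # agent reading/verifying an ASR draft
ASR_WER = 0.20         # assumed ASR word error rate

words = AUDIO_MINUTES * SPEECH_WPM

# Traditional model: every word is typed by hand.
type_minutes = words / TYPE_WPM

# Semi-automated model: read the whole draft, retype only the errors.
edit_minutes = words / EDIT_WPM + (words * ASR_WER) / TYPE_WPM

gain = type_minutes / edit_minutes - 1
print(f"from scratch: {type_minutes:.0f} min, "
      f"post-editing: {edit_minutes:.0f} min "
      f"(~{gain:.0%} productivity gain)")
```

With these particular assumptions the gain lands around 36%, squarely in the 30–50% band; a lower error rate or a faster review pace pushes it toward the "doubles" claim.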

Clinicians are also using speaker-dependent ASR engines to obviate the need for outsourcing altogether, although the growth of this approach has been relatively slow as clinicians perceive a high set-up cost (both the user and language models must be trained) and may not be willing to commit to the necessary behavior changes.

To wit, a Nuance Communications employee concedes that it is "often more important to train users how to use speech than it is to train speech systems how to recognize users" in his thorough response to a thought-provoking blog post by Robert Fortner cleverly titled Rest in Peas: The Unrecognized Death of Speech Recognition.

In large part, the dollars attached to creating efficiencies in the medical transcription market will inure to the benefit of UC transcription solutions since cost, turn-around time, accuracy and privacy are all major issues for medical transcriptions. Examples of applicable innovations include passive speaker-dependent language model training, the use of multiple speech engines to increase accuracy and improved workflow for human editing.
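On the multi-engine point: the canonical technique is NIST's ROVER, which aligns the hypotheses from several recognizers and votes word by word. Here's a minimal sketch of the voting step, assuming the hypotheses are already word-aligned (real systems handle the alignment first with dynamic programming):

```python
from collections import Counter

def rover_vote(*hypotheses):
    """Combine word-aligned outputs from several ASR engines by simple
    majority vote (a stripped-down version of the NIST ROVER idea).
    Assumes the hypotheses are already aligned word-for-word."""
    combined = []
    for candidates in zip(*hypotheses):
        word, _count = Counter(candidates).most_common(1)[0]
        combined.append(word)
    return " ".join(combined)

# Two engines each get a different word wrong; voting recovers the sentence.
print(rover_vote(
    "patient denies chest pain".split(),
    "patient denies chess pain".split(),
    "patient denies chest pane".split(),
))  # -> patient denies chest pain
```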

Closed Captioning
Recorded video is quickly becoming a common medium for intra- (e.g., training) and inter-enterprise (e.g., marketing) communications. The same benefits--speed of consumption, improved retention, searchability, etc.--that users experience with transcriptions for recordings of live events can be found with "canned" or made-for-video content.

There isn't much demand for real-time transcription in the enterprise context (though there are use cases around company-wide meetings requiring real-time translation), but the approaches applied there help guide some thinking that I'll revisit in my final post in this series.

Traditionally, real-time closed captioning is created in a two-part process. First, the dialog is converted by a stenographer (think of the person who's asked to read back the testimony in your favorite courtroom drama) into a phonetic representation of what's been said. Stenographers and closed captioners routinely record dialog at approximately 200 words per minute (as one would expect, given that this is the upper bound on typical speech rates).

The output of the stenotype machine is then fed into a system that converts the phonemes into actual words. Inaccuracies in this process account for the odd words that we see in the closed captioning ribbons from time to time on TV screens in public places (or at home if we rely on closed captioning).
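Conceptually, this second stage is a dictionary lookup from steno chords to words, with unknown chords falling through untranslated. A toy sketch (the chord spellings here are illustrative, not a real steno theory; production dictionaries hold tens of thousands of entries):

```python
# Toy illustration of the second stage: mapping stenotype chords
# (phonetic shorthand) to words via dictionary lookup. The chords
# below are illustrative, not a real steno theory.

STENO_DICT = {
    "TKPWAOD": "good",
    "PHORPBG": "morning",
    "EFRPB": "everyone",
}

def translate(chords):
    # Chords missing from the dictionary pass through untranslated,
    # which is one source of the odd words in live caption ribbons.
    return " ".join(STENO_DICT.get(c, f"[{c}]") for c in chords)

print(translate(["TKPWAOD", "PHORPBG", "EFRPB"]))  # good morning everyone
print(translate(["TKPWAOD", "WRONG"]))             # good [WRONG]
```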

In some cases, as with the BBC, agents with crisp enunciation actually re-speak what's being said into stenomasks (specially designed masks that cover one's mouth to block outside noise, with a microphone embedded inside). This parallel dictation is then fed into a speaker-dependent ASR engine to produce a near-real-time transcription with high accuracy.

Interestingly, closed captioning in the US and UK appears to be used most often (by a factor of four to one!) by viewers for whom English is a second language rather than by the intended audience--those with hearing impairments. As we start to transcribe video and audio in the enterprise context, I believe we can count on similar examples of "unintended benefits."

Google made big news in the closed captioning world (as with voicemail transcription) by announcing an automatic captioning feature for YouTube videos in November 2009. In a clever (or just honest) move, much of the messaging around this feature anticipates the inaccuracies of the machine transcription and re-focuses attention on the important benefit of making video content accessible to hearing-impaired viewers around the world.

The clever use of speaker-dependent speech engines and crowd-sourcing creates some interesting possibilities in other areas, as we'll explore in my next post.




