Nuance Communications and WGBH’s National Center for Accessible Media (NCAM) have teamed up to develop a prototype system that automatically evaluates the accuracy of real-time captions for live news programming, with the goal of improving live captions for deaf and hearing-impaired viewers.
Funded by a grant from the U.S. Department of Education, the Caption Accuracy Metrics project will identify different kinds of closed-caption errors and weight their impact. The organizations believe this work will improve how captions are produced and presented for news and other live programs by real-time captioners, known as stenocaptioners.
NCAM staff recently convened a technical review panel of major stakeholders in caption quality at the CBS Broadcast Center in New York. Representatives from broadcast and cable networks, caption agencies, deaf education experts, and the National Court Reporters Association discussed the value of the project’s work to their organizations. In addition, the FCC recently announced that it is refreshing the record of comments on caption quality, solicited in two notices of proposed rulemaking over the past five years, and will use consumer and industry feedback to determine whether new quality standards should be set.
Nuance Communications, the developer of Dragon NaturallySpeaking speech recognition technology, will create customized language processing, data analysis, and benchmarking tools for the project. The proposed prototype will track the program audio alongside its captions, compare the spoken words with the caption output, and rate accuracy based on error type and severity.
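The article does not describe the prototype's internals, but the general approach it outlines, aligning spoken words with caption output and weighting errors by type, resembles a severity-weighted word error rate. The sketch below is purely illustrative: the function name, the error categories, and the weight values are assumptions for demonstration, not the project's actual metric.

```python
# Illustrative sketch: a severity-weighted caption error rate.
# Aligns the reference transcript (spoken words) against the caption
# output using a standard edit-distance dynamic program, then charges
# a different cost per error type. Weights here are placeholders, not
# values defined by the Caption Accuracy Metrics project.

def weighted_error_rate(reference, caption, weights=None):
    """Return a severity-weighted word error rate in [0, ...].

    weights maps error types ('sub', 'ins', 'del') to costs.
    """
    w = {"sub": 1.0, "ins": 0.5, "del": 1.0}  # hypothetical severities
    if weights:
        w.update(weights)
    ref = reference.lower().split()
    hyp = caption.lower().split()
    # dp[i][j] = minimum weighted cost aligning ref[:i] with hyp[:j]
    dp = [[0.0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        dp[i][0] = dp[i - 1][0] + w["del"]
    for j in range(1, len(hyp) + 1):
        dp[0][j] = dp[0][j - 1] + w["ins"]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (
                0.0 if ref[i - 1] == hyp[j - 1] else w["sub"]
            )
            dp[i][j] = min(
                sub,
                dp[i - 1][j] + w["del"],  # word dropped from the caption
                dp[i][j - 1] + w["ins"],  # extra word in the caption
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution ("mayor" -> "mare") and one deletion ("new")
# over a 7-word reference gives 2/7 with the weights above.
print(weighted_error_rate("the mayor announced a new budget plan",
                          "the mare announced a budget plan"))
```

A real system would also need to handle timing, punctuation, and paraphrasing by stenocaptioners, which is where weighting error severity (a dropped word versus a misleading substitution) becomes important.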
Other advisors to the project include the National Institute of Standards and Technology, Gallaudet University, and the National Technical Institute for the Deaf.