Assessing the correctness of recognizer output at any point in a dialogue is a complex task that has been studied thoroughly during the past decade. Its importance lies in the need for robust dialogue systems capable of dealing with the difficulties inherent in human-machine communication: user errors and corrections, speech recognizer errors, error recovery techniques, etc.

In this paper, we present a novel approach to the problem of deciding what the user has said. We use confidence measures derived from low-level knowledge sources (acoustic and linguistic information) and generated in parallel by several topic-adapted speech recognizers. Each recognizer is aimed at the recognition of a particular topic, and the confidence measures are compared by a classifier that selects the most probable solution.

This approach proves especially well suited to difficult topics, such as proper names or confirmations, which are highly meaningful for error correction tasks. These topics show high error rates when a single application-wide speech recognizer is used, but recognition performance is greatly improved through the use of parallel recognizers. Moreover, topic-adapted recognizers also seem to help in identifying the user's intention and in detecting out-of-application utterances.
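To make the architecture concrete, here is a minimal Python sketch of the parallel topic-adapted recognition scheme described above. All names (`Hypothesis`, `classify`) and the weighted-sum scoring are illustrative assumptions; the abstract does not specify the actual classifier or how the acoustic and linguistic confidence measures are combined.

```python
# Hypothetical sketch, not the authors' implementation: several
# topic-adapted recognizers decode the same utterance in parallel,
# and a classifier compares their confidence measures to pick the
# most probable solution.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    topic: str              # topic this recognizer is adapted to
    text: str               # recognized word string
    acoustic_conf: float    # confidence from the acoustic knowledge source
    linguistic_conf: float  # confidence from the linguistic knowledge source

def classify(hypotheses: list[Hypothesis]) -> Hypothesis:
    """Toy stand-in for the classifier: score each topic's hypothesis
    by a weighted sum of its confidence measures and return the best.
    The actual classifier in the paper is not described in the abstract."""
    def score(h: Hypothesis) -> float:
        return 0.5 * h.acoustic_conf + 0.5 * h.linguistic_conf
    return max(hypotheses, key=score)

# One hypothesis per topic-adapted recognizer, e.g. for the difficult
# topics the abstract mentions (proper names, confirmations).
hypotheses = [
    Hypothesis("proper_names", "john smith", 0.82, 0.74),
    Hypothesis("confirmations", "yes that's right", 0.41, 0.55),
]

best = classify(hypotheses)
print(best.topic, best.text)  # -> proper_names john smith
```

In this sketch, a low best score across all topics could additionally be used to flag an out-of-application utterance, in line with the abstract's closing observation.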