2006
DOI: 10.1016/j.sigpro.2006.02.047

Towards a taxonomy of error-handling strategies in recognition-based multi-modal human–computer interfaces

Abstract: In this paper, we survey the different types of error-handling strategies that have been described in the literature on recognition-based human-computer interfaces. A wide range of strategies can be found in spoken human-machine dialogues, handwriting systems, and multimodal natural interfaces. We then propose a taxonomy for classifying error-handling strategies that has the following three dimensions: the main actor in the error-handling process (machine versus user), the purpose of the strategy (error prevent…
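As a rough illustration of the classification the abstract describes, the sketch below (Python, names hypothetical) encodes the two dimensions that are explicitly stated, the main actor (machine versus user) and the purpose of the strategy; the third dimension is truncated in the abstract and is therefore left out, and the purpose values beyond "error prevention" are placeholders rather than the paper's own categories.

    from dataclasses import dataclass
    from enum import Enum

    class Actor(Enum):
        # Dimension 1: the main actor in the error-handling process.
        MACHINE = "machine"
        USER = "user"

    class Purpose(Enum):
        # Dimension 2: the purpose of the strategy. Only "error prevention" is
        # visible before the abstract is cut off; CORRECTION is an illustrative
        # placeholder, not taken from the paper.
        PREVENTION = "error prevention"
        CORRECTION = "error correction"

    @dataclass
    class ErrorHandlingStrategy:
        name: str
        actor: Actor
        purpose: Purpose

    # Example entry: a user-driven correction strategy such as modality switching.
    switch_modality = ErrorHandlingStrategy(
        name="modality switching",
        actor=Actor.USER,
        purpose=Purpose.CORRECTION,
    )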

Cited by 23 publications (9 citation statements)
References 65 publications
“…Strategies to correct errors include repeating and rephrasing the utterances, spelling out words, contradicting a system response, correcting using a different modality (e.g., manual entry instead of speech), and restarting, among others [59][60][61][62].…”
Section: Which Kinds Of Errors Can Occur? (mentioning)
confidence: 99%
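The user-side correction strategies listed in this statement can be captured directly as a type; the minimal sketch below (Python, hypothetical names) does only that, for instance to log which strategy a user resorted to after a misrecognition.

    from enum import Enum, auto

    class CorrectionStrategy(Enum):
        # User-side strategies for correcting recognition errors, as listed in
        # the citing statement above.
        REPEAT = auto()           # repeat the utterance verbatim
        REPHRASE = auto()         # rephrase the utterance
        SPELL_OUT = auto()        # spell out individual words
        CONTRADICT = auto()       # contradict the system response
        SWITCH_MODALITY = auto()  # e.g. manual entry instead of speech
        RESTART = auto()          # abandon the interaction and start over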
“…They might say "roll clockwise" and perform a counterclockwise movement with their hand. Multimodal systems can suffer from compounding errors caused by incorrect recognition, or mismatched interactions such as the ones seen in this study [6]. These errors could take more time than standard uni-modal errors to correct or cause compounding errors when a second error is made during an attempt to correct the first.…”
Section: Limitations Of The Study (mentioning)
confidence: 79%
“…Human factors research in multimodal interaction concerned with recognition errors [11] is a well researched topic in multimodal interfaces [2,13,15], where investigations were typically concerned with error handling strategies devised by users in the face of recognition errors (e.g., modality switching to a 'familiar', more efficient modality). In speech-based interfaces, a common finding is that the most intuitive and instinctive way for correcting errors in speech is to repeat the spoken utterance [21] and hyperarticulate it [14].…”
Section: Dealing With Recognition Errors Across Modalities (mentioning)
confidence: 99%
“…In a follow-up study by [6], they tested 3 commercial Automatic Speech Recognition (ASR) systems where they found that a little over 50% of the time, subjects would continue to repeat the utterance to a spiral depth of level 3. However, while recognition errors have been well studied in domains such as speech-based interfaces [2,14], handwriting recognition [16,17], and multimodal interfaces [13], less attention has been given to usability issues surrounding device-based gesture interaction. An exception is the study by [8], where they investigated user tolerance for errors in touch-less computer vision-enabled gesture interaction under both desktop (keyboard readily available in front of subjects) and ubiquitous computing settings (keyboard not readily available).…”
Section: Dealing With Recognition Errors Across Modalities (mentioning)
confidence: 99%
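To make the "spiral depth" idea concrete, here is a minimal sketch (Python, hypothetical names, not from the cited study) that counts consecutive failed correction attempts for an utterance and suggests switching modality once the depth reaches a threshold; the threshold of 3 mirrors the "spiral depth of level 3" quoted above, while the fallback policy itself is an illustrative assumption.

    class SpiralDepthTracker:
        # Tracks consecutive failed correction attempts ("spiral depth") for a
        # single utterance and suggests a modality switch past a threshold.

        def __init__(self, switch_threshold: int = 3) -> None:
            self.switch_threshold = switch_threshold
            self.depth = 0

        def record_failed_attempt(self) -> str:
            # Called after each misrecognized correction attempt.
            self.depth += 1
            if self.depth >= self.switch_threshold:
                return "suggest switching modality (e.g. keyboard instead of speech)"
            return "allow another spoken repetition"

        def record_success(self) -> None:
            # A successful recognition ends the error spiral.
            self.depth = 0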