The 21st International ACM SIGACCESS Conference on Computers and Accessibility 2019
DOI: 10.1145/3308561.3353781

Evaluating the Benefit of Highlighting Key Words in Captions for People who are Deaf or Hard of Hearing

Abstract: Recent research has investigated automatic methods for identifying how important each word in a text is for the overall message, in the context of people who are Deaf and Hard of Hearing (DHH) viewing video with captions. We examine whether DHH users report benefits from visual highlighting of important words in video captions. In formative interview and prototype studies, users indicated a preference for underlining of 5%-15% of words in a caption text to indicate that they are important, and they expressed a…

Cited by 28 publications (7 citation statements)
References 34 publications
“…The analysis presented above has revealed that BERT contextualized word-embedding can better represent the importance of words compared to word2vec embeddings, which had been used in prior work on word-importance prediction (Kafle and Huenerfauth, 2019). Research indicates that DHH viewers often follow key terms while skimming through captions, and researchers have proposed approaches to guide DHH readers to quickly identify keywords in caption text through visual highlighting (Kafle et al, 2019b). Our findings may allow broadcasters to use embeddings to determine the important words within a sentence and to highlight those words in captions, to support DHH viewers' ability to read (Amin et al, 2021a) the captions effectively.…”
Section: Discussion (mentioning)
confidence: 99%
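As a concrete illustration of the highlighting approach described in this citation, the short Python sketch below takes per-word importance scores from any source (contextual embeddings or otherwise) and underlines the top 5%-15% of words in a caption, the proportion participants preferred in the paper's formative studies. The function underline_top_words, the proportion parameter, and the toy scores are hypothetical and are not the authors' implementation.

# A minimal sketch (hypothetical, not the authors' implementation): given
# per-word importance scores from any model, underline the top ~5-15% most
# important words in a caption, mirroring the highlighting proportion that
# participants preferred in the paper's formative studies.

def underline_top_words(words, scores, proportion=0.10):
    """Wrap the highest-scoring words in <u> tags; `proportion` is an assumed default."""
    if not words:
        return ""
    k = max(1, round(len(words) * proportion))          # number of words to highlight
    top = set(sorted(range(len(words)), key=lambda i: scores[i], reverse=True)[:k])
    return " ".join(f"<u>{w}</u>" if i in top else w for i, w in enumerate(words))

# Toy example: scores stand in for model output (e.g. from contextual embeddings).
caption = "The hurricane is expected to make landfall near the coast tonight".split()
toy_scores = [0.1, 0.9, 0.2, 0.3, 0.2, 0.4, 0.8, 0.3, 0.2, 0.7, 0.5]
print(underline_top_words(caption, toy_scores, proportion=0.15))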
“…Existing metrics used in transcription or captioning include Word Error Rate (WER) (Ali and Renals, 2018) or Number of Error in Recognition (NER) (Romero-Fresco and Martínez Pérez, 2015). As noted by Kafle et al (2019b), a major shortcoming of these metrics is that they do not consider the importance of individual words when measuring the accuracy of captioned transcripts (comparing to the reference transcript) and most metrics assign equal weights to each word. DHH viewers rely more heavily on important keywords while skimming through caption text (Kafle et al, 2019b).…”
Section: Introduction (mentioning)
confidence: 99%
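As a concrete illustration of the point made above about equal per-word weights, the short Python sketch below computes a standard edit-distance WER and an importance-weighted variant in which errors on high-importance reference words cost more. This is an assumption for illustration only; the function name weighted_wer, the per-word weights, and the toy transcripts are hypothetical and do not come from the cited work.

# A minimal sketch (an assumption, not a metric defined in the cited work):
# standard WER treats every word equally; an importance-weighted variant
# penalizes errors on high-importance reference words more heavily.

def weighted_wer(reference, hypothesis, weights=None):
    """Edit-distance WER; `weights` gives a cost per reference word (default 1.0 each)."""
    if weights is None:
        weights = [1.0] * len(reference)
    n, m = len(reference), len(hypothesis)
    # d[i][j] = minimum weighted edit cost aligning reference[:i] with hypothesis[:j]
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + weights[i - 1]            # deletion of reference word i
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + 1.0                       # insertion
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if reference[i - 1] == hypothesis[j - 1] else weights[i - 1]
            d[i][j] = min(d[i - 1][j] + weights[i - 1],    # deletion
                          d[i][j - 1] + 1.0,               # insertion
                          d[i - 1][j - 1] + sub)           # substitution or match
    return d[n][m] / max(sum(weights), 1e-9)

ref = "severe storm warning issued for the county".split()
hyp = "severe storm morning issued for county".split()
print(weighted_wer(ref, hyp))                               # equal weights: plain WER
print(weighted_wer(ref, hyp, [0.5, 1.0, 1.0, 0.5, 0.2, 0.2, 1.0]))  # importance-weighted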
“…We could also imagine post-class collaborative editing by hearing students to correct the captions. Future tests will be conducted to predict [4] and evaluate the impact of captions on students who are deaf during a lecture. Those tests will also give us clues for optimized formatting of our captions.…”
Section: Future Work (mentioning)
confidence: 99%
“…So far, there has been little research about how to format transcriptions for this particular audience (e.g. highlighting keywords [4], [9]). Furthermore, despite ASR systems' good performance, decoding errors are still problematic for caption comprehension [8], especially for students who are deaf, who "do not read by 'sounding the words out'".…”
Section: Introduction (mentioning)
confidence: 99%