2015
DOI: 10.1016/j.specom.2015.09.010

Using automatic speech recognition to assess spoken responses to cognitive tests of semantic verbal fluency

Abstract: Cognitive tests of verbal fluency (VF) consist of verbalizing as many words as possible in one minute that either start with a specific letter of the alphabet or belong to a specific semantic category. These tests are widely used in neurological, psychiatric, mental health, and school settings, and their validity for clinical applications has been extensively demonstrated. However, VF tests are currently administered and scored manually, making them too cumbersome to use, particularly for longitudinal cognitive …

Cited by 26 publications (26 citation statements)
References: 35 publications
“…Our results show an overall low error rate of 20.01% for the automated system, compared to the manual transcripts. This in itself represents an improvement over results of other authors using ASR systems for evaluating the SVF tasks [53, 54]. In line with previous research, diagnostic groups differ significantly in the number of errors made by ASR (Kruskal-Wallis, χ² = 13.7, df = 2, p < 0.001).…”
Section: Discussion (supporting)
confidence: 86%
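The group comparison quoted above is a standard Kruskal-Wallis test on per-participant ASR error counts. A minimal sketch of that kind of analysis follows; the group labels and error counts are hypothetical placeholders, not data from the citing study.

```python
# Sketch: compare ASR error counts across diagnostic groups with a
# Kruskal-Wallis test. All values below are illustrative placeholders.
from scipy.stats import kruskal

errors_by_group = {
    "control":  [1, 2, 0, 3, 1, 2],
    "mci":      [3, 4, 2, 5, 3, 4],
    "dementia": [5, 6, 4, 7, 6, 5],
}

h_stat, p_value = kruskal(*errors_by_group.values())
print(f"Kruskal-Wallis H = {h_stat:.1f}, p = {p_value:.4f}")
```

Kruskal-Wallis is a non-parametric alternative to one-way ANOVA, a reasonable choice here since per-speaker error counts are unlikely to be normally distributed.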
“…Others have used ASR techniques to examine the VF test. In Pakhomov et al. (2015), the same Kaldi ASR toolkit (Povey et al., 2011), and in König et al. (2018), Google's Automatic Speech Recognition (ASR) service, were used for automatic transcription of responses. These studies either attempt to predict the raw VF score based on automatically generated responses (Pakhomov et al., 2015) or only investigate count-based measures beside the raw VF score for differentiating MCI from cognitively intact participants (König et al., 2018).…”
Section: Discussion (mentioning)
confidence: 99%
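Predicting a raw VF score from ASR output essentially reduces to counting unique, in-category words in the automatic transcript. The sketch below illustrates that idea under simplified assumptions; the category lexicon, tokenization, and example transcript are placeholders rather than the resources used in the cited studies.

```python
# Sketch: raw semantic verbal fluency (SVF) score from an ASR transcript,
# counted as unique responses that belong to the target category.
# The lexicon and example transcript are illustrative placeholders.
ANIMAL_LEXICON = {"dog", "cat", "horse", "lion", "tiger", "elephant", "zebra"}

def raw_svf_score(asr_transcript: str) -> int:
    tokens = asr_transcript.lower().split()
    valid_unique = {t for t in tokens if t in ANIMAL_LEXICON}  # repetitions count once
    return len(valid_unique)

print(raw_svf_score("dog cat cat horse um lion dog"))  # -> 4
```

A production scorer would also need to handle multi-word responses, morphological variants, and explicit rules for repetitions and intrusions, none of which are shown here.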
“…These probabilities represent the confidence with which the ASR system selected the word from possible alternatives for a given portion of the speech signal. In previous work (Pakhomov et al., 2015), we have used this feature to develop a reliability filter that relied on the proportion of words produced by the ASR system with low confidence to identify speech samples that are not suitable for fully automated analysis and may require manual transcription. We applied this strategy to the current study sample and found that, for example, setting aside 30% of the samples for manual transcription improves the ICC to 0.93 for the SVF test and 0.84 for the PVF test on the remaining 70% of the samples (data not shown).…”
Section: Results (mentioning)
confidence: 99%
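The reliability filter described in the quoted passage can be read as a simple per-sample rule over ASR word confidences: if too large a proportion of recognized words fall below a confidence threshold, the recording is routed to manual transcription. A minimal sketch of that rule follows; the threshold and cutoff proportion are illustrative assumptions, not the values used in the cited work.

```python
# Sketch: flag recordings whose ASR output contains too many low-confidence
# words so they can be routed to manual transcription. Threshold values are
# illustrative assumptions.
from typing import List

def needs_manual_transcription(word_confidences: List[float],
                               low_conf_threshold: float = 0.5,
                               max_low_conf_proportion: float = 0.3) -> bool:
    if not word_confidences:
        return True  # no recognized words: treat the sample as unreliable
    n_low = sum(1 for c in word_confidences if c < low_conf_threshold)
    return n_low / len(word_confidences) > max_low_conf_proportion

print(needs_manual_transcription([0.9, 0.95, 0.4, 0.3, 0.85]))  # -> True (2/5 low)
```

In the setup quoted above, routing roughly the least reliable 30% of samples to manual transcription reportedly raised the ICC to 0.93 (SVF) and 0.84 (PVF) on the remaining samples.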
“…Phonemic verbal fluency and semantic verbal fluency tests were administered using a validated computerized tool (Pakhomov et al., 2015). An iPad app was used to provide standardized test instructions and to audio-record the participant’s responses.…”
Section: Methods (mentioning)
confidence: 99%