2021
DOI: 10.1109/jbhi.2021.3100368
Evaluation of a Novel Speech-in-Noise Test for Hearing Screening: Classification Performance and Transducers’ Characteristics

Abstract: One of the current gaps in teleaudiology is the lack of methods for adult hearing screening viable for use in individuals of unknown language and in varying environments. We have developed a novel automated speech-in-noise test that uses stimuli viable for use in non-native listeners. The test reliability has been demonstrated in laboratory settings and in uncontrolled environmental noise settings in previous studies. The aim of this study was: (i) to evaluate the ability of the test to identify hearing loss u…

Cited by 6 publications (6 citation statements)
References 30 publications
“…The preliminary experimental evidence in non-native listeners reported in [5] confirms that the speech recognition performance of non-native listeners may be similar to that of native listeners when VCV recordings in English are used. It is important to note that such a screening procedure, based on speech stimuli viable for use in listeners of an unknown language, can substantially increase access to screening by eliminating possible language-related barriers, opening the way to delivering screening tests at a distance that can identify undetected hearing loss with high accuracy [5,24]. For example, previous studies have applied the adaptive 3AFC procedure to a population of 350 participants, including native and non-native unscreened adults, showing that the speech-in-noise test can identify hearing loss of mild and moderate degree with accuracy up to 0.87 and 0.90, respectively [5,[23][24][25][26],57].…”
Section: Discussion
confidence: 99%
“…To account for common situations in which the language is not known (e.g., screening in multilingual settings, or screening at a distance), we recently developed a novel, automated speech-in-noise test for hearing screening in multilingual settings. Specifically, we used a set of nonsense VCV stimuli viable for use across listeners of various languages, presented in a multiple-choice format using an adaptive three-alternative forced-choice (3AFC) task [5,[23][24][25][26]. Several ways to communicate individual responses to stimuli in psychophysical tasks have been used in the literature, e.g., automated handling of typed and spoken responses using automated speech recognition algorithms [27].…”
Section: Speech-in-noise Screening Test Design
confidence: 99%
“…Moreover, the data augmentation process seems to simplify the intrinsic behavior of certain variables by cleaning up some regions of classification uncertainty. For example, the model trained on synthetic dataset #15 amplifies the well-known relationship between SRT and hearing loss and allows us to define a cut-off at -9.49 dB SNR, which is similar to the one suggested by previous studies (e.g., [43]). As expected, the LLM model trained on synthetic dataset #11 (with worse MMD and C2S metrics) has a much larger number of rules, with lower average covering and different structure and cut-off values than the one trained on the real dataset.…”
Section: B. Analysis of Similarity Between Rules
confidence: 94%
“…This rule synthesizes well the relationship between speech recognition ability and hearing loss. Conversely, subjects with poor speech recognition in noise (i.e., fewer than 59 correct responses), as in R^{r,HL}_1, will more likely suffer from hearing loss [23], [43]. Fig.…”
Section: Analysis of Classification Performance
confidence: 99%
“…In this way, test difficulty and duration depend on the subject's ability to discriminate speech stimuli in noise. The full test and procedure are explained in [10]. In speech-in-noise tests, the outcome is usually the speech reception threshold (SRT), defined as the minimum SNR at which an individual can recognize a certain percentage of the speech material (i.e., 79.4% in the three-alternative design used here).…”
Section: Study Design and Datamentioning
confidence: 99%
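The 79.4% target quoted above is the convergence point of a three-down/one-up adaptive rule: the track settles where the probability of three consecutive correct responses equals 0.5, i.e. p = 0.5^(1/3) ≈ 0.794. As an illustration only — the simulated listener, step size, and stopping rule below are assumptions for the sketch, not the published test procedure — a minimal staircase can be simulated as:

```python
import math
import random

def simulate_response(snr, srt_true=-10.0, slope=1.0):
    # Hypothetical listener: logistic psychometric function with a 1/3
    # guess rate, as in a three-alternative forced-choice (3AFC) task.
    p = 1/3 + (2/3) / (1 + math.exp(-slope * (snr - srt_true)))
    return random.random() < p

def three_down_one_up(start_snr=0.0, step=2.0, n_reversals=12):
    # 3-down/1-up rule: the track converges where p^3 = 0.5,
    # i.e. p ~ 0.794 -- the 79.4% point reported as the SRT.
    snr, streak, direction = start_snr, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulate_response(snr):
            streak += 1
            if streak == 3:            # three correct in a row -> harder
                streak = 0
                if direction == +1:    # change of direction = reversal
                    reversals.append(snr)
                direction = -1
                snr -= step
        else:
            streak = 0                 # a single error -> easier
            if direction == -1:
                reversals.append(snr)
            direction = +1
            snr += step
    # Estimate the SRT as the mean SNR at the later reversals,
    # discarding the first few while the track is still converging.
    tail = reversals[4:]
    return sum(tail) / len(tail)

random.seed(0)
srt_estimate = three_down_one_up()
```

With the assumed listener centered at -10 dB SNR, the estimate lands near the SNR where the simulated psychometric function crosses 79.4% correct, mirroring how the test's SRT outcome is defined.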