Background: With the growing adult population using electronic hearing devices such as cochlear implants or hearing aids, there is an increasing worldwide need for auditory training (AT) to promote optimal device use. However, financial constraints and scheduling conflicts often make clinical AT infeasible.

Objective: To address this gap between need and accessibility, we primarily aimed to develop a mobile health (mHealth) app called Speech Banana for AT. The app would be substantially more affordable and portable than clinical AT; would deliver a validated training model reflecting modern techniques; and would track users’ progress in speech comprehension, providing greater continuity between periodic in-person visits. To improve international availability, our secondary aim was to implement the English-language training model in Korean as a proof of concept for worldwide usability.

Methods: A problem- and objective-centered Design Science Research Methodology approach was adopted to develop the Speech Banana app. A review of previous literature and computer-based learning programs outlined current gaps in AT, whereas interviews with speech pathologists and users clarified the features to be addressed in the app. Past and present users were invited to evaluate the app via community forums and the System Usability Scale.

Results: Speech Banana has been implemented in English and Korean for iPad and web use. The app comprises 38 lessons, which include analytic exercises pairing visual and auditory stimuli and synthetic quizzes presenting auditory stimuli only. During quizzes, users type the sentence they heard, and the app provides visual feedback on performance. Users may select a male or female speaker and the volume of background noise, allowing for training across a range of frequencies and signal-to-noise ratios. There were more than 3200 downloads of the English iPad app and almost 100 downloads of the Korean app; more than 100 users registered for the web apps. The English app received a System Usability Scale rating of “good” from 6 users, and the Korean app received a rating of “OK” from 16 users.

Conclusions: Speech Banana offers accessible AT with a validated curriculum, allowing users to develop speech comprehension skills with the aid of a mobile device. This mHealth app holds potential as a supplement to clinical AT, particularly in this era of global telemedicine.
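To make the quiz mechanics concrete, the following is a minimal sketch of how a typed response might be scored against the target sentence to drive word-level visual feedback. The function names and the scoring rule are illustrative assumptions, not Speech Banana's actual implementation.

```python
# Hypothetical sketch of word-level quiz feedback; names and scoring
# rules are illustrative assumptions, not the app's actual code.
import re

def normalize(sentence: str) -> list[str]:
    """Lowercase, strip punctuation, and split into words."""
    return re.findall(r"[a-z']+", sentence.lower())

def score_response(stimulus: str, response: str) -> tuple[float, list[bool]]:
    """Return the fraction of stimulus words matched, plus per-word flags
    that a UI could use to highlight correct and missed words."""
    target = normalize(stimulus)
    typed = set(normalize(response))
    flags = [word in typed for word in target]
    accuracy = sum(flags) / len(target) if target else 0.0
    return accuracy, flags

accuracy, flags = score_response("The boy ran to the store",
                                 "the boy ran to a shop")
print(f"{accuracy:.0%}", flags)
# 83% [True, True, True, True, True, False]
```

A per-word boolean vector like this is one simple way an app could render which parts of a sentence were comprehended, while the aggregate fraction could feed the progress tracking described above.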
Objective: Speech tests assess the ability of people with hearing loss to comprehend speech with a hearing aid or cochlear implant. The tests are usually scored at the word or sentence level; however, few analyze errors at the phoneme level. There is therefore a need for an automated program that visualizes the accuracy of phonemes in these tests in real time.

Method: The program reads in stimulus-response pairs and obtains their phonemic representations from an open-source digital pronouncing dictionary. The stimulus phonemes are aligned with the response phonemes via a modification of the Levenshtein Minimum Edit Distance algorithm. Alignment is achieved via dynamic programming with costs for insertions, deletions, and substitutions modified according to phonological features. The accuracy of each phoneme is based on the F1-score. Accuracy is visualized with respect to place and manner (consonants) or height (vowels). Confusion matrices for the phonemes are used in an information transfer analysis of ten phonological features. A histogram of the information transfer for the features over a frequency-like range is presented as a phonemegram.

Results: The program was applied to two datasets. One consisted of test data at the sentence and word levels. Stimulus-response sentence pairs from six volunteers with different degrees of hearing loss and modes of amplification were analyzed. Four volunteers listened to sentences from a mobile auditory training app, while two listened to sentences from a clinical speech test. Stimulus-response word pairs from three lists were also analyzed. The other dataset consisted of published stimulus-response pairs from experiments in which 31 participants with cochlear implants listened to 400 Basic English Lexicon sentences spoken by different talkers at four SNR levels. In all cases, visualization was obtained in real time. Analysis of 12,400 actual and random pairs showed that the program was robust to the nature of the pairs.

Conclusion: It is possible to automate the alignment of phonemes extracted from stimulus-response pairs from speech tests in real time. The alignment then makes it possible to visualize the accuracy of responses via phonological features in two ways. Such visualization of phoneme alignment and accuracy could aid clinicians and scientists.
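The core of the method is a cost-weighted minimum edit distance. Below is a minimal sketch of that dynamic program under simplifying assumptions: the feature table lists only a few ARPAbet consonants with toy (place, manner, voicing) features, and the cost function is a placeholder for the fuller phonological feature sets the published program uses. CMUdict-style phoneme symbols are assumed for illustration; the abstract specifies only an open-source pronouncing dictionary.

```python
# Sketch of phoneme alignment via a cost-weighted Levenshtein (minimum
# edit distance) dynamic program. Feature table and costs are simplified
# placeholders, not the published program's actual values.

# Toy feature vectors: (place, manner, voicing) for a few ARPAbet consonants.
FEATURES = {
    "P": ("bilabial", "stop", 0), "B": ("bilabial", "stop", 1),
    "T": ("alveolar", "stop", 0), "D": ("alveolar", "stop", 1),
    "S": ("alveolar", "fricative", 0), "Z": ("alveolar", "fricative", 1),
}

INS_DEL_COST = 1.0

def sub_cost(a: str, b: str) -> float:
    """Substitution cost: cheaper when phonemes share phonological features."""
    if a == b:
        return 0.0
    fa, fb = FEATURES.get(a), FEATURES.get(b)
    if fa is None or fb is None:
        return 1.0
    shared = sum(x == y for x, y in zip(fa, fb))
    return 1.0 - shared / (len(fa) + 1)  # more shared features -> lower cost

def align(stim: list[str], resp: list[str]):
    """Fill the edit-distance table, then trace back to recover aligned
    (stimulus, response) pairs; None marks an insertion or deletion."""
    m, n = len(stim), len(resp)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * INS_DEL_COST
    for j in range(1, n + 1):
        D[0][j] = j * INS_DEL_COST
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(
                D[i - 1][j - 1] + sub_cost(stim[i - 1], resp[j - 1]),
                D[i - 1][j] + INS_DEL_COST,   # deletion from stimulus
                D[i][j - 1] + INS_DEL_COST)   # insertion in response
    pairs, i, j = [], m, n
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                D[i][j] == D[i - 1][j - 1] + sub_cost(stim[i - 1], resp[j - 1])):
            pairs.append((stim[i - 1], resp[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and D[i][j] == D[i - 1][j] + INS_DEL_COST:
            pairs.append((stim[i - 1], None)); i -= 1
        else:
            pairs.append((None, resp[j - 1])); j -= 1
    return pairs[::-1]

print(align(["B", "S", "T"], ["P", "Z", "T"]))
# [('B', 'P'), ('S', 'Z'), ('T', 'T')] -- feature-weighted substitutions
# beat insert/delete because each pair shares place and manner.
```

Once phonemes are aligned this way, the matched pairs populate a confusion matrix per phoneme, from which per-phoneme F1-scores and the information transfer analysis described above follow directly.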