Purpose Individuals with neurogenic speech disorders require ongoing therapeutic support to achieve functional communication goals. Alternative methods of service delivery, such as tablet-based speech therapy applications, may help bridge the gap and bring therapeutic interventions to the patient in an engaging way. The purpose of this study was to evaluate an iPad-based speech therapy app that uses automatic speech recognition (ASR) software to provide feedback on speech accuracy, in order to determine the ASR's accuracy against human judgment and whether participants' speech improved with this ASR-based feedback.

Method Five participants with apraxia of speech plus aphasia secondary to stroke completed an intensive 4-week at-home therapy program using a novel word training app with built-in ASR. A multiple-baseline design across participants and behaviors was employed, with weekly probes and follow-up at 1 month posttreatment. Four sessions per week of 100 practice trials each were prescribed, with one session clinician-run and the remainder completed independently. The dependent variables of interest were ASR–human agreement on accuracy during practice trials and human-judged word production accuracy over time in probes. User experience surveys were also completed immediately posttreatment.

Results ASR–human agreement on accuracy averaged approximately 80%, a common threshold for interrater agreement. All participants demonstrated improved word production accuracy over time with the ASR-based feedback and maintained their gains after 1 month. All participants reported enjoying using the app with the support of a speech pathologist.

Conclusion For these participants with apraxia of speech plus aphasia due to stroke, satisfactory gains in word production accuracy were made with an app-based therapy program providing ASR-based feedback on accuracy. The findings support further testing of this ASR-based approach as a supplement to clinician-run sessions, both to help clients with similar profiles achieve a higher amount and intensity of practice and to empower them to manage their own therapy program.

Supplemental Material https://doi.org/10.23641/asha.8206628
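The ASR–human agreement figure above is a trial-level percent-agreement statistic. As an illustration only (the study does not publish its scoring procedure), the following minimal sketch shows how such agreement might be computed from binary correct/incorrect judgments; the function name and data are hypothetical.

```python
# Hypothetical sketch: percent agreement between ASR and human accuracy judgments.
# Each list holds one binary judgment per practice trial (True = word judged correct).
# This illustrates the metric only; it is not the study's actual scoring code.

def percent_agreement(asr_judgments, human_judgments):
    """Return the proportion of trials on which ASR and human judgments match."""
    if len(asr_judgments) != len(human_judgments):
        raise ValueError("Judgment lists must cover the same trials")
    matches = sum(a == h for a, h in zip(asr_judgments, human_judgments))
    return matches / len(asr_judgments)

# Example with made-up data for a 10-trial practice block:
asr   = [True, True, False, True, False, True, True, True, False, True]
human = [True, True, False, False, False, True, True, True, True, True]
print(f"ASR-human agreement: {percent_agreement(asr, human):.0%}")  # -> 80%
```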
Think-aloud protocols are commonly used to evaluate player experiences of video games but suffer from a lack of objectivity and timeliness. Quantitative captures of physiological data, on the other hand, are effective, providing detailed, unbiased, and continuous records of player responses, but lack the context needed for interpretation. This paper documents how both approaches can be used together in practice by comparing video-cued retrospective think-aloud data and physiological data collected during a video gameplay experiment. We observed that many interesting physiological responses did not feature in participants' think-aloud data and, conversely, that reports of interesting experiences were sometimes not observed in the collected physiological data. Drawing on lessons from our experiment, we present some of the challenges of combining these approaches and offer guidelines on how qualitative and quantitative data can be used together to gain deeper insights into player experiences.
The availability of consumer-facing virtual reality (VR) headsets makes virtual training an attractive alternative to expensive traditional training. Recent work has shown that virtually trained workers perform bimanual assembly tasks as well as those trained with traditional methods. This paper presents a study that investigated how levels of immersion affect learning transfer between virtual and physical bimanual gearbox assembly tasks. The study used a within-subject design and examined three different training systems: VR training with direct 3D input (HTC VIVE Pro), VR training without 3D input (Google Cardboard), and passive video-based training. Twenty-three participants were recruited. Training effectiveness was measured by participants' performance in assembling 3D-printed copies of the gearboxes at two points: immediately after and 2 weeks after the training. The results showed that participants preferred immersive VR training. Surprisingly, despite being rated less favourably, performance after video-based training was similar to that after training on the HTC VIVE Pro. However, video training led to a significant performance decrease in the retention test 2 weeks after the training.