Individuals with severe speech and physical impairments may have concomitant visual acuity impairments (VAI) or ocular motility impairments (OMI) that affect visual BCI use. We report on the use of the Shuffle Speller typing interface for an SSVEP BCI copy-spelling task under three conditions: simulated VAI, simulated OMI, and unimpaired vision. To mitigate the effect of visual impairments, we introduce a method that adaptively selects a user-specific trial length to maximize expected information transfer rate (ITR); expected ITR is shown to closely approximate the rate of correct letter selections. All participants could type under the unimpaired and simulated VAI conditions, with no significant differences in typing accuracy or speed. Most participants (31 of 37) could not type under the simulated OMI condition; those who could achieved high accuracy, but at slower typing speeds. Reported workload and discomfort were low, and satisfaction high, under the unimpaired and simulated VAI conditions. Implications and future directions for examining the effects of visual impairment on BCI use are discussed.
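The exact form of the expected-ITR objective is not given in this summary, but the trade-off it optimizes can be illustrated with the standard Wolpaw ITR formula. The sketch below is a rough illustration rather than the authors' method: it assumes a 26-class speller and a hypothetical user-specific estimate of selection accuracy at each candidate trial length (e.g., from calibration data), and picks the length that maximizes expected bits per minute.

```python
import numpy as np

def wolpaw_itr_bits(accuracy: float, n_classes: int) -> float:
    """Wolpaw information transfer rate in bits per selection."""
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0
    if p >= 1.0:
        return float(np.log2(n))
    return float(np.log2(n) + p * np.log2(p)
                 + (1 - p) * np.log2((1 - p) / (n - 1)))

def best_trial_length(trial_lengths, predicted_accuracy, n_classes=26):
    """Pick the trial length (seconds) that maximizes expected ITR in bits/min.

    `predicted_accuracy[i]` is a user-specific estimate of selection accuracy
    at `trial_lengths[i]` seconds, e.g., obtained from calibration data.
    """
    itrs = [wolpaw_itr_bits(acc, n_classes) * 60.0 / t
            for t, acc in zip(trial_lengths, predicted_accuracy)]
    best = int(np.argmax(itrs))
    return trial_lengths[best], itrs[best]

# Hypothetical calibration estimates: longer trials yield higher accuracy.
lengths = [1.0, 2.0, 3.0, 4.0, 5.0]
accs    = [0.30, 0.55, 0.80, 0.92, 0.96]
t_star, itr_star = best_trial_length(lengths, accs)
print(f"selected trial length: {t_star:.1f} s, expected ITR: {itr_star:.1f} bits/min")
```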
Icon-based communication systems are widely used in the field of Augmentative and Alternative Communication. Typically, icon-based systems have lagged behind word- and character-based systems in terms of predictive typing functionality, due to the challenges inherent in training icon-based language models.[1] We propose a method for synthesizing training data for use in icon-based language models, and explore two different modeling strategies.

[1] One notable exception to this trend is the system used in SymbolPath (Wiegand and Patel, 2012b), which uses semantic frames to attempt non-sequential symbol prediction (Wiegand and Patel, 2012a). That work, however, was limited to a specific and small icon set.
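The synthesis procedure and the two modeling strategies are not described in this summary. Purely as a generic illustration of the task, the sketch below synthesizes icon sequences from ordinary text using a hypothetical word-to-icon mapping and trains a simple bigram model for next-icon prediction; it should not be read as the authors' approach.

```python
from collections import defaultdict

# Hypothetical word-to-icon mapping used to synthesize icon sequences
# from ordinary text; the actual synthesis method may differ.
WORD_TO_ICON = {"i": "I", "want": "WANT", "to": "TO", "eat": "EAT",
                "drink": "DRINK", "water": "WATER", "an": "A",
                "a": "A", "apple": "APPLE"}

def synthesize_icon_corpus(sentences):
    """Map word sequences to icon-label sequences, skipping unmapped words."""
    corpus = []
    for sent in sentences:
        icons = [WORD_TO_ICON[w] for w in sent.lower().split() if w in WORD_TO_ICON]
        if icons:
            corpus.append(icons)
    return corpus

def train_bigram(corpus):
    """Count icon bigrams (with start/end markers) for next-icon prediction."""
    counts = defaultdict(lambda: defaultdict(int))
    for icons in corpus:
        for prev, nxt in zip(["<s>"] + icons, icons + ["</s>"]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev, k=3):
    """Return the top-k icons most frequently following `prev`."""
    ranked = sorted(counts[prev].items(), key=lambda kv: -kv[1])
    return [icon for icon, _ in ranked[:k]]

corpus = synthesize_icon_corpus(["I want to eat an apple",
                                 "I want to drink water"])
model = train_bigram(corpus)
print(predict_next(model, "WANT"))   # ['TO']
```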
Access to communication is critical for individuals with late-stage amyotrophic lateral sclerosis (ALS) and minimal volitional movement, but they sometimes present with concomitant visual or ocular motility impairments that affect their performance with eye tracking or visual brain-computer interface (BCI) systems. In this study, we explored the use of modified eye tracking and steady state visual evoked potential (SSVEP) BCI, in combination with the Shuffle Speller typing interface, for this population. Two participants with late-stage ALS, visual impairments, and minimal volitional movement completed a single-case experimental research design comparing copy-spelling performance with three different typing systems: (1) commercially available eye tracking communication software, (2) Shuffle Speller with modified eye tracking, and (3) Shuffle Speller with SSVEP BCI. Participant 1 was unable to type any correct characters with the commercial system, but achieved accuracies of up to 50% with Shuffle Speller eye tracking and 89% with Shuffle Speller BCI. Participant 2 also had higher maximum accuracies with Shuffle Speller, typing with up to 63% accuracy with eye tracking and 100% accuracy with BCI. However, participants’ typing accuracy for both Shuffle Speller conditions was highly variable, particularly in the BCI condition. Both the Shuffle Speller interface and SSVEP BCI input show promise for improving typing performance for people with late-stage ALS. Further development of innovative BCI systems for this population is needed.
Computer-Assisted Pronunciation Training (CAPT) systems aim to help children learn the correct pronunciation of words. However, while many commercial CAPT apps are available online, there is no consensus among Speech-Language Pathologists (SLPs) or non-professionals about which CAPT systems, if any, work well. The prevailing assumption is that practicing with such programs is less reliable and thus does not provide the feedback children need to improve their performance. The most common method for assessing pronunciation performance is the Goodness of Pronunciation (GOP) technique. Our paper proposes two new GOP techniques. We find that pronunciation models that use explicit knowledge of mispronunciation patterns can lead to more accurate classification of whether a phoneme was pronounced correctly. We evaluate the proposed pronunciation assessment methods against a state-of-the-art baseline GOP approach and show that the proposed techniques yield classification performance closer to that of a human expert.
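The two proposed GOP variants are not detailed in this summary, but the kind of baseline they are compared against can be sketched. The code below shows one common formulation of GOP, computed from frame-level phone posteriors over a segment aligned to the canonical phone, with a hypothetical decision threshold for flagging mispronunciations.

```python
import numpy as np

def gop_score(frame_posteriors: np.ndarray, canonical_phone: int) -> float:
    """Baseline GOP: mean log-ratio of the canonical phone's posterior to the
    best-scoring phone, over the frames aligned to that phone.

    frame_posteriors: (T, n_phones) array of per-frame phone posteriors,
                      e.g., from a forced-aligned acoustic model.
    canonical_phone:  index of the phone the speaker was supposed to say.
    """
    eps = 1e-10
    target = np.log(frame_posteriors[:, canonical_phone] + eps)
    best = np.log(frame_posteriors.max(axis=1) + eps)
    return float(np.mean(target - best))   # 0 means the canonical phone is the best match

def is_mispronounced(frame_posteriors, canonical_phone, threshold=-1.0):
    """Threshold the GOP score; `threshold` is a hypothetical tuned value."""
    return gop_score(frame_posteriors, canonical_phone) < threshold

# Toy example: 4 frames, 3 phones; the canonical phone (index 0) dominates.
post = np.array([[0.80, 0.10, 0.10],
                 [0.70, 0.20, 0.10],
                 [0.60, 0.30, 0.10],
                 [0.90, 0.05, 0.05]])
print(gop_score(post, canonical_phone=0))        # ~0.0 -> well pronounced
print(is_mispronounced(post, canonical_phone=0)) # False
```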