The proposed VB-PARE system contributes to state-of-the-art respiration monitoring methods by extending the concept of passive, noninvasive airway resistance measurement.
Abstract—Stroke survivors with severe upper limb (UL) impairment face years of therapy to recover function. Robot-assisted therapy (RT) is increasingly used in the field for goal-oriented rehabilitation as a means to improve UL function. To be used effectively for wrist and hand therapy, current RT systems require the patient to have a minimal active range of movement in the UL; patients without active voluntary movement cannot use these systems. We have overcome this limitation by harnessing tongue motion, allowing patients to control a robot through synchronous tongue and hand movement. This novel RT device combines a commercially available UL exoskeleton, the Hand Mentor, with our custom-designed Tongue Drive System as its controller. We conducted a proof-of-concept study on six nondisabled participants to evaluate the system's usability, and a case series on three participants with movement limitations from poststroke hemiparesis. Data from two stroke survivors indicate that for patients with chronic, moderate UL impairment following stroke, a 15-session training regimen resulted in modest decreases in impairment, with functional improvement and improved quality of life. The improvement met the standard of minimal clinically important difference for activities of daily living, mobility, and strength assessments.
Speech-language pathologists (SLPs) are trained to correct the articulation of people diagnosed with motor speech disorders by analyzing articulator motion and assessing speech outcomes while patients speak. To assist SLPs in this task, we present the Multimodal Speech Capture System (MSCS), which records and displays the kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. The collected speech modalities (tongue motion, lip gestures, and voice) are visualized not only in real time, to provide patients with instant feedback, but also offline, to allow SLPs to perform post-hoc analysis of articulator motion, particularly that of the tongue, given its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components and demonstrate its basic visualization capabilities with a healthy individual repeating the words "Hello World". A proof-of-concept prototype has been successfully developed for this purpose and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak more naturally. Pattern-matching algorithms applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods, which are mostly subjective and may vary from one SLP to another.
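The abstract does not specify which pattern-matching algorithms would be used; one common choice for comparing articulator trajectories of different durations is dynamic time warping (DTW). The sketch below is purely illustrative and not the authors' method: it computes a DTW distance between a reference trajectory (e.g., an SLP's tongue-position trace for a word) and a patient's attempt, where a lower score indicates a closer match.

```python
def dtw_distance(reference, attempt):
    """Dynamic time warping distance between two 1-D trajectories.

    Illustrative only: real articulator data would be multidimensional
    (e.g., 3-D tongue-tracer coordinates sampled over time).
    """
    n, m = len(reference), len(attempt)
    INF = float("inf")
    # cost[i][j] = minimal accumulated distance aligning the first i
    # samples of `reference` with the first j samples of `attempt`.
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(reference[i - 1] - attempt[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a reference sample
                                 cost[i][j - 1],      # skip an attempt sample
                                 cost[i - 1][j - 1])  # match both
    return cost[n][m]

# Identical trajectories align perfectly; a stretched copy still scores 0
# because DTW absorbs timing differences.
print(dtw_distance([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))        # 0.0
print(dtw_distance([0.0, 1.0, 2.0], [0.0, 1.0, 1.0, 2.0]))   # 0.0
```

Such a score could be mapped to a simple visual gauge in the real-time display, giving the patient the quantitative feedback the abstract envisions.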