In assistive robotics, research in Brain-Computer Interface (BCI) aims to understand human intent with the goal of enhancing Human-Robot Interaction (HRI). This research introduces a framework that enables a person with an upper limb disability to use an assistive system and maintain self-reliance, and discusses its implementation and evaluation. The framework interlinks functional components and establishes a behavioral sequence to operate the assistive system in three stages: action classification, verification, and execution. An action is classified based on identified human intent and verified through haptic and/or visual feedback before execution. Human intent is conveyed through facial expressions, and verification is performed through head movements. The interlinked functional components are an EEG sensing device, a head movement recorder, a dual-purpose glove, a visual feedback environment, and a robotic arm. The evaluation covers the system's ability to recognize facial expressions, the time required to respond using head movements, the effectiveness of vibrotactile feedback effects in conveying system information, and the user's ability to follow the established behavioral sequence. Based on the evaluation, a personalized training data set should be used to calibrate facial expression recognition and to define the time allowed for responses during verification. Custom vibrotactile effects were effective in conveying system information to the user. An initial evaluation of the framework with three volunteers showed a 100% success rate in following the behavioral sequence and controlling the system, providing confidence to recruit more volunteers to identify and address improvements and to expand the operational capability of the framework.
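The abstract describes the three-stage behavioral sequence but not its implementation, so the following is only a minimal sketch of how such a classify-verify-execute loop might be structured. All component interfaces (`classify_facial_expression`, `play_vibrotactile_effect`, `read_head_movement`, `perform`) and the timeout value are hypothetical placeholders, not the authors' API.

```python
# Minimal sketch of the three-stage behavioral sequence (classify -> verify
# -> execute). All class and method names here are hypothetical; the paper
# does not specify an API for its functional components.
from enum import Enum, auto


class Stage(Enum):
    CLASSIFY = auto()  # identify intent from a facial expression
    VERIFY = auto()    # confirm via head movement after haptic/visual feedback
    EXECUTE = auto()   # send the verified action to the robotic arm


def run_sequence(eeg, glove, display, arm, respond_timeout_s=3.0):
    """One pass through the behavioral sequence.

    `eeg`, `glove`, `display`, and `arm` stand in for the framework's
    functional components (EEG sensing device, dual-purpose haptic glove,
    visual feedback environment, robotic arm). The timeout is a placeholder;
    the paper calibrates the response window per user from training data.
    """
    stage = Stage.CLASSIFY
    action = None
    while True:
        if stage is Stage.CLASSIFY:
            action = eeg.classify_facial_expression()  # e.g. "pick"
            stage = Stage.VERIFY
        elif stage is Stage.VERIFY:
            glove.play_vibrotactile_effect(action)  # haptic feedback
            display.show_candidate_action(action)   # visual feedback
            confirmed = eeg.read_head_movement(timeout=respond_timeout_s)
            stage = Stage.EXECUTE if confirmed else Stage.CLASSIFY
        elif stage is Stage.EXECUTE:
            arm.perform(action)
            return action
```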
Research in Brain-Computer Interface (BCI) aims to understand human intent with the goal of enhancing Human-Robot Interaction (HRI), especially in the field of assistive robotics. The goal of this research is to develop a behavioral-sequence-based framework that helps persons with upper limb disabilities maintain self-reliance. The framework operates in stages and links multiple functional components to identify human intent and control a robotic arm. This work introduces the development, operation, and evaluation of the framework and the linked functional components that acquire, process, evaluate, and map BCI signals, generated using facial expressions and head movements, to predefined actions. The framework integrates multiple functional components: a non-invasive BCI control device, a vibrotactile haptic feedback device, a visual feedback environment, an evaluation and training platform, and a robotic arm. The robot's pick, move, and place actions are mapped to distinct facial expressions; each classified action is presented to the user through haptic and visual feedback for verification before the robotic arm performs it. The initial evaluation of the developed framework was 100% successful with two volunteers, who also provided constructive feedback. This successful initial evaluation provides confidence to test the framework with more volunteers, to identify limitations and areas of improvement, and to apply it in further HRI research on assistive robotic systems.
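Since the abstract states that pick, move, and place actions are mapped to facial expressions but does not say which expressions pair with which actions, the sketch below illustrates one possible dispatch under assumed labels. The expression names, the `arm` interface, and the helper functions are all assumptions for illustration only.

```python
# Hypothetical expression-to-action dispatch; the abstract does not specify
# the facial expression labels or their pairings, so these are assumed.
from typing import Callable, Dict


def pick(arm) -> None:
    arm.close_gripper()           # grasp the object


def move(arm) -> None:
    arm.move_to(arm.target_pose)  # carry the object to the target pose


def place(arm) -> None:
    arm.open_gripper()            # release the object


# Classified facial expressions dispatch to the predefined robot actions.
ACTIONS: Dict[str, Callable] = {
    "smile": pick,
    "clench": move,
    "raise_brow": place,
}


def execute_if_verified(expression: str, verified: bool, arm) -> bool:
    """Run the mapped action only after haptic/visual verification succeeds."""
    action = ACTIONS.get(expression)
    if action is None or not verified:
        return False  # unmapped expression or failed verification: restart
    action(arm)
    return True
```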