Cognitive workload is one of the most widely invoked human factors in human–machine interaction (HMI) and neuroergonomics. Precise assessment of cognitive and mental workload (MWL) is vital and requires accurate neuroimaging to monitor and evaluate the cognitive states of the brain. In this study, we decoded four classes of MWL using long short-term memory (LSTM) with 89.31% average accuracy for a brain–computer interface (BCI). Brain activity signals were acquired using functional near-infrared spectroscopy (fNIRS) from the prefrontal cortex (PFC) region of the brain. We performed a supervised MWL experiment with four MWL levels on 15 participants (both male and female), with 10 trials of each MWL level per participant. Real-time four-level MWL states are assessed using the fNIRS system, and initial classification is performed using three established machine learning (ML) techniques, namely support vector machine (SVM), k-nearest neighbor (k-NN), and artificial neural network (ANN), with average accuracies of 54.33, 54.31, and 69.36%, respectively. Novel deep learning (DL) frameworks are then proposed, which utilize a convolutional neural network (CNN) and LSTM with 87.45 and 89.31% average accuracies, respectively, to solve the high-dimensional four-level cognitive state classification problem. Statistical analyses, a t-test and a one-way F-test (ANOVA), are also performed on the accuracies obtained through the ML and DL algorithms. The results show that the proposed DL (LSTM and CNN) algorithms significantly improve classification performance compared with the ML (SVM, ANN, and k-NN) algorithms.
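To make the LSTM-based decoding pipeline concrete, the following is a minimal sketch of a four-class MWL classifier over windowed fNIRS signals. The layer sizes, channel count, and window length are illustrative assumptions, not the architecture reported in the study.

```python
# Minimal sketch: LSTM classifier for four-level MWL from fNIRS windows.
# Input shape (batch, time_steps, channels) is an assumption for illustration.
import torch
import torch.nn as nn

class MWLClassifierLSTM(nn.Module):
    def __init__(self, n_channels=8, hidden_size=64, n_classes=4):
        super().__init__()
        # The LSTM reads the HbO/HbR time series channel-wise at each time step.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            num_layers=2, batch_first=True, dropout=0.3)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)         # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])          # logits for the four MWL levels

model = MWLClassifierLSTM()
dummy = torch.randn(16, 200, 8)            # 16 windows, 200 samples, 8 channels
print(model(dummy).shape)                  # torch.Size([16, 4])
```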
The brain–computer interface (BCI) provides an alternative means of communication between the brain and external devices by recognizing brain activity and translating it into external commands. Functional near-infrared spectroscopy (fNIRS) is becoming popular as a non-invasive modality for brain activity detection. Recent trends show that deep learning has significantly enhanced the performance of BCI systems. However, the inherent bottlenecks for deep learning in the BCI domain are the requirement for vast amounts of training data, lengthy recalibration times, and the expensive computational resources needed to train deep networks. Building a high-quality, large-scale annotated dataset for deep learning-based BCI systems is exceptionally tedious, complex, and expensive. This study investigates the novel application of transfer learning to fNIRS-based BCI to address three concerns: insufficient training data, long training time, and limited accuracy. We applied symmetric homogeneous feature-based transfer learning to a convolutional neural network (CNN) designed explicitly for fNIRS data collected from twenty-six (26) participants performing the n-back task. The results suggest that the proposed method reaches its maximum (saturated) accuracy sooner and outperforms the traditional CNN model by 25.58% in average accuracy over the same training duration, thereby reducing training time, recalibration time, and computational resources.
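A common way to realize homogeneous feature-based transfer learning for a CNN is to reuse convolutional layers trained on source-subject data and re-train only the classifier head on the new subject. The sketch below illustrates that pattern; the network architecture, layer names, class count, and the checkpoint file name are assumptions, not the authors' exact model.

```python
# Hedged sketch: feature-based transfer learning for an fNIRS CNN.
# Pretrained convolutional features are frozen; only the head is fine-tuned.
import torch
import torch.nn as nn

class SourceCNN(nn.Module):
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(               # shared feature extractor
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)   # subject-specific head

    def forward(self, x):                            # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = SourceCNN()
# model.load_state_dict(torch.load("source_subjects.pt"))  # hypothetical pretrained weights
for p in model.features.parameters():                # freeze transferred layers
    p.requires_grad = False
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
```

Freezing the feature extractor is what shortens training and recalibration: only the small head is optimized on the target subject's limited data.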
Mental workload is a neuroergonomic human factor widely used in system safety planning and in areas such as brain–machine interfaces (BMI), neurofeedback, and assistive technologies. Robotic prosthetic methodologies are employed to assist hemiplegic patients in performing routine activities. Assistive technologies must be designed and operated to interface easily with the brain using few protocols, in an attempt to optimize mobility and autonomy. A possible answer to these design questions may lie in neuroergonomics coupled with BMI systems. In this study, two human factors are addressed: designing a lightweight, wearable robotic exoskeleton hand to assist potential stroke patients, and integrating a portable brain interface that uses mental workload (MWL) signals acquired with a portable functional near-infrared spectroscopy (fNIRS) system. The system can generate command signals for operating the wearable robotic exoskeleton hand from two-state MWL signals. The fNIRS system records optical signals, in the form of changes in the concentration of oxygenated and deoxygenated hemoglobin (HbO and HbR), from the prefrontal cortex (PFC) region of the brain. Fifteen participants took part in this study and performed hand-grasping tasks. Two-state MWL signals acquired from the PFC region of each participant's brain are separated using a machine learning classifier, support vector machines (SVM), and used to operate the robotic exoskeleton hand. The maximum classification accuracy is 91.31%, obtained using a combination of mean and slope features, with an average information transfer rate (ITR) of 1.43. These results show the feasibility of a two-state MWL (fNIRS-based) robotic exoskeleton hand BMI system for assisting hemiplegic patients in physical grasping tasks.
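The two-state pipeline described above (mean and slope features fed to an SVM) can be sketched as follows. The window length, channel count, kernel settings, and the placeholder trial data are assumptions for illustration; the ITR helper uses the standard Wolpaw formula, which may differ from the exact ITR definition used in the study.

```python
# Sketch: mean-slope features from windowed HbO signals, SVM classification,
# and an ITR estimate. Data here are placeholders, not the study's recordings.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mean_slope_features(window):
    """window: (samples, channels) of HbO concentration change."""
    t = np.arange(window.shape[0])
    means = window.mean(axis=0)
    slopes = np.polyfit(t, window, 1)[0]        # linear-fit slope per channel
    return np.concatenate([means, slopes])

rng = np.random.default_rng(0)
X = np.stack([mean_slope_features(rng.standard_normal((100, 8)))
              for _ in range(120)])             # 120 placeholder trials
y = rng.integers(0, 2, size=120)                # rest vs. grasp labels
clf = SVC(kernel="rbf", C=1.0)
print(cross_val_score(clf, X, y, cv=5).mean())

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Information transfer rate (bits/min), Wolpaw formula; accuracy < 1."""
    p = accuracy
    bits = (np.log2(n_classes) + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n_classes - 1)))
    return bits * 60.0 / trial_seconds
```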
The constantly evolving human–machine interaction and advances in sociotechnical systems have made it essential to analyze vital human factors such as mental workload, vigilance, fatigue, and stress by monitoring brain states for optimum performance and human safety. Similarly, brain signals have become paramount for rehabilitation and assistive purposes in fields such as brain–computer interfaces (BCI) and closed-loop neuromodulation for neurological disorders and motor disabilities. The complexity, non-stationary nature, and low signal-to-noise ratio of brain signals pose significant challenges for researchers designing robust and reliable BCI systems that can accurately detect meaningful changes in brain states outside the laboratory environment. Different neuroimaging modalities are used in hybrid settings to enhance accuracy, increase the number of control commands, and decrease the time required for brain activity detection. Functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) measure the hemodynamic and electrical activity of the brain with good spatial and temporal resolution, respectively. However, in hybrid settings, where both modalities enhance the output performance of the BCI, data compatibility remains a challenge for real-time BCI applications because of the large discrepancy between their sampling rates and numbers of channels. Traditional methods, such as downsampling and channel selection, result in the loss of important information while making both modalities compatible. In this study, we present a novel recurrence plot (RP)-based time-distributed convolutional neural network and long short-term memory (CNN-LSTM) algorithm for the integrated classification of fNIRS and EEG for hybrid BCI applications. The acquired brain signals are first projected into a non-linear dimension with RPs and fed into the CNN to extract essential features without any downsampling. Then, the LSTM is used to learn the chronological features and time-dependence relations to detect brain activity. The average accuracies achieved with the proposed model were 78.44% for fNIRS, 86.24% for EEG, and 88.41% for hybrid EEG-fNIRS BCI. Moreover, the maximum accuracies achieved were 85.9, 88.1, and 92.4%, respectively. The results confirm the viability of the RP-based deep-learning algorithm for successful BCI systems.
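The core idea, projecting each signal into a recurrence plot and then applying a CNN per frame before an LSTM over frames, can be sketched as below. The RP threshold, absence of time-delay embedding, frame count, and layer sizes are assumptions made for illustration and are not the paper's exact configuration.

```python
# Hedged sketch: recurrence-plot projection plus a time-distributed CNN + LSTM.
import numpy as np
import torch
import torch.nn as nn

def recurrence_plot(signal, eps=0.1):
    """Binary RP: R[i, j] = 1 where |x_i - x_j| < eps."""
    d = np.abs(signal[:, None] - signal[None, :])
    return (d < eps).astype(np.float32)

class TimeDistributedCNNLSTM(nn.Module):
    def __init__(self, n_classes=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # applied to each RP frame
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, frames, 1, H, W)
        b, f = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, f, -1)   # CNN per RP frame
        _, (h_n, _) = self.lstm(feats)                     # temporal modelling
        return self.head(h_n[-1])

rp = recurrence_plot(np.sin(np.linspace(0, 8 * np.pi, 64)))  # toy 1-D signal
frames = torch.tensor(rp).repeat(10, 1, 1).unsqueeze(1)      # 10 RP frames
print(TimeDistributedCNNLSTM()(frames.unsqueeze(0)).shape)   # torch.Size([1, 2])
```

Because the RP is built from each modality's own samples, fNIRS and EEG frames can be fed to the same pipeline without downsampling either signal to the other's rate.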