With growing complexity in modern society, mental stress has become inevitable in every human life. Long-term mental stress can instigate several chronic diseases and therefore requires early evaluation. Existing mental stress estimation techniques mostly use complicated, multi-channel, expert-dependent electroencephalogram (EEG)-based approaches. Moreover, the respiratory signal carries promising stress-related information, but its acquisition is also complicated and requires multimodal assistance. Hence, this research proposes a unique approach based on multimodal characterization of the easy-to-acquire photoplethysmogram (PPG) signal to assess the stressed condition. Notably, the developed algorithm not only uses a primary PPG feature but also derives the respiratory rate from the same PPG signal via simplified methodologies. The technique is evaluated on PPG recordings from the publicly available DEAP dataset. The efficiency of these easy-to-compute features is then assessed via a simple threshold-based classification technique that categorizes the stressed and relaxed conditions with an average accuracy of 98.43%. Compared with existing methods, the proposed algorithm not only shows improved performance, but its simple methodology and minimal acquisition load also justify its applicability in real-time, standalone personal healthcare applications.
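The pipeline described above (derive a respiratory rate from the PPG baseline, then apply a threshold rule) can be sketched as follows. This is a minimal illustration, not the paper's actual method: the 0.1–0.5 Hz respiratory band, the 18 breaths/min threshold, and all function names are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 128  # DEAP physiological recordings are downsampled to 128 Hz

def respiratory_rate_from_ppg(ppg, fs=FS):
    """Estimate the respiratory rate (breaths/min) from the low-frequency
    respiratory modulation of a PPG signal (assumed 0.1-0.5 Hz band)."""
    # Isolate the respiratory band, then pick its dominant spectral peak.
    sos = butter(2, [0.1, 0.5], btype="band", fs=fs, output="sos")
    resp = sosfiltfilt(sos, ppg)
    spectrum = np.abs(np.fft.rfft(resp))
    freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    dominant_hz = freqs[band][np.argmax(spectrum[band])]
    return dominant_hz * 60.0

def classify_state(resp_rate, threshold=18.0):
    """Hypothetical threshold rule: an elevated respiratory rate is
    taken here as an indicator of the stressed condition."""
    return "stressed" if resp_rate > threshold else "relaxed"
```

In practice the paper combines this PPG-derived respiratory rate with a primary PPG feature; the sketch shows only the respiratory branch and the threshold step.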
Contemporary human-machine interfaces (HMIs) employ a wide range of human expressions to provide assistive support to the elderly and disabled population. Depending on the type of disability, expressions conveyed through eye movements often provide the most efficient means of communication. Nowadays, standard electroencephalogram (EEG)-based arrangements used to analyze neurological states are also being adopted for the detection of eye movements. However, most state-of-the-art EEG-based studies either detect eye movements in fewer directions or use a higher feature dimension with limited classification accuracy. In this study, a robust, simple, and automated algorithm is proposed that analyzes the EEG signal to classify six different types of eye movement. The algorithm applies the discrete wavelet transform (DWT) to EEG signals acquired from six different leads to eliminate a wide range of noise and artefacts. Then, two features per lead are extracted from the reconstructed wavelet coefficients and combined to form a binary feature map. Finally, a unique feature obtained from the weighted sum of the binary map is used to classify the six types of eye movement via a threshold-based technique. The algorithm achieves high average accuracy (Acc), sensitivity (Se), and specificity (Sp) of 95.85%, 95.83%, and 95.83%, respectively, using only a single feature value. Compared with other state-of-the-art methods, the simple methodology and the obtained results indicate the strong potential of the proposed algorithm for personal assistive applications.
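The binary-map and weighted-sum steps described above can be sketched as follows. This is an illustrative reconstruction under assumptions: the per-lead thresholds, the power-of-two weights, and the code-to-movement lookup (including the class labels) are hypothetical stand-ins, not the paper's actual values.

```python
import numpy as np

def binary_feature_map(lead_features, lead_thresholds):
    """Threshold each per-lead feature to a 0/1 flag, forming a binary map."""
    return (np.asarray(lead_features) > np.asarray(lead_thresholds)).astype(int)

def map_to_code(bits):
    """Collapse the binary map into a single scalar via a weighted sum;
    binary place-value weights 2^0, 2^1, ... make each map unique."""
    weights = 2 ** np.arange(len(bits))
    return int(np.dot(bits, weights))

# Hypothetical lookup from scalar codes to the six movement classes;
# the real code-to-class assignment would come from the paper.
CODE_TO_MOVEMENT = {
    3: "left", 9: "right", 12: "up",
    18: "down", 33: "up-left", 36: "up-right",
}

def classify_eye_movement(code):
    """Threshold-style lookup of the single scalar feature."""
    return CODE_TO_MOVEMENT.get(code, "unknown")
```

The power-of-two weighting guarantees that every distinct binary map yields a distinct scalar, which is what allows a single feature value to separate the classes.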