An important step in the construction of a Brain-Computer Interface (BCI) device is the development of a model that can recognize emotions from electroencephalogram (EEG) signals. Research in this area is very challenging because the EEG signal is non-stationary, non-linear, and contaminated by noise from artifacts caused by muscle activity and poor electrode contact. EEG signals are recorded with non-invasive wearable devices using a large number of electrodes, which increases the dimensionality and, thereby, the computational complexity of the EEG data; it also reduces the comfort of the subjects. This paper implements our holographic features, investigates electrode selection, and uses the most relevant channels to maximize model accuracy. The ReliefF and Neighborhood Component Analysis (NCA) methods were used to select the optimal electrodes, and verification was performed on four publicly available datasets. Our holographic feature maps were constructed using computer-generated holography (CGH) based on the values of signal characteristics arranged in space. The resulting 2D maps are the input to a Convolutional Neural Network (CNN), which serves as the feature extraction method. The proposed methodology uses a reduced set of electrodes, which differs between men and women, and obtains state-of-the-art results in the three-dimensional emotional space. The experimental results show that the channel selection methods significantly improve emotion recognition rates, achieving accuracies of 90.76% for valence, 92.92% for arousal, and 92.97% for dominance.
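To make the channel selection step more concrete, below is a minimal Python sketch of a simplified ReliefF ranking applied to per-channel features. The function name, the feature-matrix layout, and all parameter choices are illustrative assumptions rather than the paper's actual implementation; NCA could be applied analogously (e.g., via scikit-learn's NeighborhoodComponentsAnalysis).

```python
import numpy as np

def relieff_channel_scores(X, y, n_iters=200, k=5, seed=0):
    """Simplified ReliefF: score each channel by how well it separates
    nearest same-class samples (hits) from different-class ones (misses).
    X: (n_samples, n_channels) per-channel feature values; y: class labels.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_channels = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xn = (X - X.min(axis=0)) / span            # scale features to [0, 1]
    w = np.zeros(n_channels)
    for _ in range(n_iters):
        i = rng.integers(n_samples)
        dist = np.abs(Xn - Xn[i]).sum(axis=1)  # L1 distance to sample i
        dist[i] = np.inf                       # exclude the sample itself
        same = y == y[i]
        hits = np.argsort(np.where(same, dist, np.inf))[:k]
        misses = np.argsort(np.where(same, np.inf, dist))[:k]
        # Relevant channels differ little from hits and a lot from misses.
        w -= np.abs(Xn[hits] - Xn[i]).mean(axis=0) / n_iters
        w += np.abs(Xn[misses] - Xn[i]).mean(axis=0) / n_iters
    return w

# Hypothetical usage: rank 32 electrodes from placeholder features and
# keep the 10 most relevant ones for, e.g., low/high valence labels.
X = np.random.rand(400, 32)
y = np.random.randint(0, 2, size=400)
top_channels = np.argsort(relieff_channel_scores(X, y))[::-1][:10]
```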
Deaf and hard-of-hearing people face many challenges in everyday life. Their communication relies on sign language, and the ability of the cultural and social environment to fully understand that language determines whether or not it is accessible to them. Technology is a key factor with the potential to provide solutions that achieve higher accessibility and therefore improve the quality of life of deaf and hard-of-hearing people. In this paper, we introduce a smart home automation system specifically designed to provide real-time sign language recognition. The contribution of this paper comprises several elements. A novel hierarchical architecture is presented, including resource- and time-aware modules: a wake-up module and a high-performance sign recognition module based on a Conv3D network. To achieve high-performance classification, multi-modal fusion of the RGB and depth modalities with temporal alignment was used. Then, a small Croatian sign language database containing 25 different language signs for use in a smart home environment was created in collaboration with the deaf community. The system was deployed on an Nvidia Jetson TX2 embedded system with a StereoLabs ZED M stereo camera for online testing. The obtained results demonstrate that the proposed practical solution is a viable approach for real-time smart home control.
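Since the abstract does not detail the fusion architecture, the following PyTorch sketch shows one plausible reading of the described approach: two Conv3D streams over temporally aligned RGB and depth clips, with late fusion by feature concatenation before classification into the 25 sign classes. All layer sizes, class names, and the concatenation-based fusion are illustrative assumptions; the paper's exact alignment and fusion details may differ.

```python
import torch
import torch.nn as nn

class Conv3DStream(nn.Module):
    """One modality stream: stacked 3D convolutions over (C, T, H, W) clips."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d((1, 2, 2)),          # pool spatially, keep time
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),          # global pool -> (B, 64, 1, 1, 1)
        )

    def forward(self, x):
        return self.features(x).flatten(1)    # (B, 64)

class TwoStreamSignNet(nn.Module):
    """Late fusion of temporally aligned RGB (3-ch) and depth (1-ch) streams."""
    def __init__(self, num_classes=25):
        super().__init__()
        self.rgb = Conv3DStream(3)
        self.depth = Conv3DStream(1)
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb_clip, depth_clip):
        fused = torch.cat([self.rgb(rgb_clip), self.depth(depth_clip)], dim=1)
        return self.classifier(fused)

# Example: a batch of 2 clips, each 16 frames of 112x112 pixels.
net = TwoStreamSignNet()
logits = net(torch.randn(2, 3, 16, 112, 112),
             torch.randn(2, 1, 16, 112, 112))
```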