The use of deep learning for fault diagnosis is already a common approach. However, integrating discriminative information about fault types and fault scales into deep learning models for rich multitask fault diagnosis still deserves attention. In this study, a deep multitask-based multiscale feature fusion network model (MEAT) is proposed to address the limitations and poor adaptability of traditional convolutional neural network models on complex tasks. The model performs multidimensional feature extraction through convolution at different scales to obtain fault information at different levels, and uses a hierarchical attention mechanism to weight and fuse these features, achieving an accuracy of 99.95% on the overall six-class fault classification task. In addition, the model decomposes fault classification into two subtasks through multitask mapping, discriminating fault size and fault type separately. Of these, fault-size classification reached an accuracy of 100%, with Precision, Recall, and F1 score all equal to 1, achieving accurate diagnosis of bearing faults.
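To make the described pipeline concrete, the following is a minimal PyTorch sketch of the general pattern: parallel convolution branches at different scales, an attention-weighted fusion of the branch features, and two task heads for fault type and fault size. All layer sizes, kernel scales, and class counts here are illustrative assumptions, not the paper's actual MEAT configuration.

```python
import torch
import torch.nn as nn

class MultiScaleMultiTaskSketch(nn.Module):
    """Sketch: multiscale conv branches -> attention-weighted fusion ->
    shared trunk -> two task heads. Sizes are assumptions, not MEAT's."""
    def __init__(self, n_types=3, n_sizes=3, channels=16):
        super().__init__()
        # Parallel 1-D conv branches with different kernel sizes (scales).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, channels, k, padding=k // 2),
                          nn.ReLU(), nn.AdaptiveAvgPool1d(32))
            for k in (3, 7, 15)
        ])
        # One attention score per scale branch, softmax-normalized below.
        self.attn = nn.Linear(channels * 32, 1)
        self.trunk = nn.Sequential(nn.Linear(channels * 32, 64), nn.ReLU())
        self.type_head = nn.Linear(64, n_types)   # fault-type subtask
        self.size_head = nn.Linear(64, n_sizes)   # fault-size subtask

    def forward(self, x):                          # x: (batch, 1, length)
        feats = torch.stack([b(x).flatten(1) for b in self.branches], dim=1)
        weights = torch.softmax(self.attn(feats), dim=1)   # (batch, 3, 1)
        fused = (weights * feats).sum(dim=1)       # attention-weighted fusion
        h = self.trunk(fused)
        return self.type_head(h), self.size_head(h)

model = MultiScaleMultiTaskSketch()
type_logits, size_logits = model(torch.randn(8, 1, 1024))
```

In a multitask setup of this kind, training would typically minimize the sum of the cross-entropy losses of the two heads over the shared trunk.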
Emotion recognition is essential for computers to understand human emotions. Traditional EEG emotion recognition methods have significant limitations. To improve the accuracy of EEG emotion recognition, we propose a multiview feature fusion attention convolutional recurrent neural network (multi-aCRNN) model. Multi-aCRNN combines a CNN, a GRU, and attention mechanisms to deeply fuse features from multiple perspectives. Specifically, a multiscale CNN combines frequency-domain and spatial-domain features through convolutions at different scales. The attention mechanism then weights the frequency- and spatial-domain information of different time periods to identify the more informative temporal segments. Finally, a bidirectional GRU learns implicit feature representations in the time domain, achieving deep fusion of features across the time, frequency, and spatial domains. To address label noise, we apply label smoothing, which reduces the influence of noisy labels and yields better emotion classification. The model is validated by fivefold cross-validation on the EEG data of 32 subjects from the public DEAP dataset. Multi-aCRNN achieves average classification accuracies of 96.43% and 96.30% on the arousal and valence classification tasks, respectively. In conclusion, multi-aCRNN better integrates EEG features from different perspectives and provides better classification results for emotion recognition.
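As a rough illustration of this CNN-attention-GRU pipeline with label smoothing, here is a minimal PyTorch sketch. A single conv scale stands in for the paper's multiscale CNN, and all channel counts, hidden sizes, and input shapes are assumptions for illustration, not the multi-aCRNN configuration.

```python
import torch
import torch.nn as nn

class ACRNNSketch(nn.Module):
    """Sketch: per-time-step CNN features -> attention over time steps ->
    bidirectional GRU -> classifier. Dimensions are illustrative only."""
    def __init__(self, n_channels=32, n_classes=2, hidden=64):
        super().__init__()
        # CNN over (EEG channels x frequency bands) at each time step.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.attn = nn.Linear(64, 1)   # scores the usefulness of each time step
        self.gru = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):              # x: (batch, time, channels, bands)
        b, t, c, f = x.shape
        feats = self.cnn(x.reshape(b * t, c, f)).reshape(b, t, 64)
        scores = torch.softmax(self.attn(feats), dim=1)
        feats = feats * scores         # emphasize informative time segments
        out, _ = self.gru(feats)       # bidirectional temporal modeling
        return self.fc(out[:, -1])

# Label smoothing (built into PyTorch >= 1.10) softens the one-hot targets,
# which mitigates the effect of noisy labels during training.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
model = ACRNNSketch()
logits = model(torch.randn(4, 10, 32, 5))
loss = criterion(logits, torch.randint(0, 2, (4,)))
```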