Background and objective: Convolutional neural networks (CNNs) play an important role in medical image segmentation. Among the many CNN variants, the U-net architecture is one of the best-known fully convolutional networks for medical semantic segmentation tasks. Recent work shows that the U-net network can be made substantially deeper, resulting in improved segmentation performance. Although stacking more layers directly into the network is a popular way to make it deeper, doing so may lead to vanishing gradients or redundant computation during training.
Methods: A novel CNN architecture is proposed that integrates an Inception-Res module and a densely connected convolutional module into the U-net architecture. The proposed model consists of the following parts: first, the Inception-Res block increases the width of the network by replacing the standard convolutional layers; second, the Dense-Inception block extracts features and makes the network deeper without additional parameters; third, a down-sampling block reduces the size of the feature maps to accelerate learning, and an up-sampling block restores the feature maps to their original resolution.
Results: The proposed model is tested on blood vessel segmentation in retinal images, lung segmentation of CT data from the benchmark Kaggle datasets, and MRI brain tumor segmentation from MICCAI BraTS 2017. The experimental results show that the proposed method outperforms state-of-the-art algorithms on these three tasks, reaching average Dice scores of 0.9857 for lung segmentation, 0.9582 for blood vessel segmentation, and 0.9867 for brain tumor segmentation.
Conclusions: The experiments highlight that combining the inception module with dense connections in the U-net architecture is a promising approach for semantic medical image segmentation.
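To make the architectural idea concrete, below is a minimal PyTorch sketch of an Inception-Res style block of the kind the abstract describes: parallel convolution branches of different kernel sizes (the inception part) whose concatenated output is projected and added back to the input (the residual part). The branch widths, kernel sizes, and 1x1 projection are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch only: an Inception-Res style block under assumed branch widths/kernels.
import torch
import torch.nn as nn

class InceptionResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        branch_ch = channels // 4
        # Parallel branches with different receptive fields (inception idea).
        self.branch1 = nn.Sequential(
            nn.Conv2d(channels, branch_ch, kernel_size=1),
            nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, branch_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(channels, branch_ch, kernel_size=5, padding=2),
            nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(channels, branch_ch, kernel_size=1),
            nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True))
        # 1x1 projection so the concatenated branches match the input width.
        self.project = nn.Conv2d(4 * branch_ch, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = torch.cat([self.branch1(x), self.branch3(x),
                         self.branch5(x), self.branch_pool(x)], dim=1)
        # Residual connection: add the projected inception output to the input.
        return self.relu(x + self.project(out))

# Example: a 64-channel feature map passes through the block with its shape unchanged,
# so the block can replace a standard convolutional layer inside a U-net stage.
x = torch.randn(1, 64, 128, 128)
print(InceptionResBlock(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```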
Electroencephalogram (EEG), as a direct response to brain activity, can be used to detect mental states and physical conditions. Among the various EEG-based emotion recognition studies, traditional recognition methods still suffer from complicated feature extraction and low recognition rates because EEG signals are nonlinear, non-stationary, and subject to individual differences. This paper therefore first proposes the novel concept of electrode-frequency distribution maps (EFDMs) built with the short-time Fourier transform (STFT). A residual-block-based deep convolutional neural network (CNN) is then proposed for automatic feature extraction and emotion classification from EFDMs. To address the small number of available EEG samples and the individual differences in emotion, which make it difficult to construct a universal model, the paper further proposes a cross-dataset emotion recognition method based on deep model transfer learning. Experiments were carried out on two publicly available datasets. The proposed method achieved an average classification accuracy of 90.59% on short EEG segments from SEED, which is 4.51% higher than the baseline method. The pre-trained model was then applied to DEAP through deep model transfer learning with only a few samples, resulting in an average accuracy of 82.84%. Finally, gradient-weighted class activation mapping (Grad-CAM) is adopted to visualize which features the CNN learned from the EFDMs during training, leading to the conclusion that the high-frequency bands are more favorable for emotion recognition.
Highlights: A novel concept of EFDMs with STFT based on multi-channel EEG signals is proposed. A CNN with four residual blocks is constructed for emotion recognition. Cross-dataset emotion recognition is performed through deep model transfer learning. The number of training samples needed for cross-dataset recognition is studied. Key EEG information is obtained automatically from EFDMs using Grad-CAM.
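The abstract does not spell out how an EFDM is assembled, so the following is one plausible construction as a sketch, not the authors' pipeline: compute the STFT of each EEG channel, average the spectral magnitude over time, and stack the electrodes into a 2D electrode-by-frequency image that a CNN can consume. The sampling rate, window length, and 0-50 Hz crop are assumptions.

```python
# Illustrative sketch (assumed parameters, not the authors' exact method).
import numpy as np
from scipy.signal import stft

def efdm(eeg, fs=200, nperseg=128, fmax=50.0):
    """eeg: array of shape (n_channels, n_samples); returns (n_channels, n_freqs)."""
    rows = []
    for channel in eeg:
        freqs, _, zxx = stft(channel, fs=fs, nperseg=nperseg)
        keep = freqs <= fmax                          # restrict to the band of interest
        rows.append(np.abs(zxx[keep]).mean(axis=1))   # time-averaged magnitude spectrum
    m = np.stack(rows)                                # electrode x frequency map
    return (m - m.min()) / (m.max() - m.min() + 1e-8)  # normalize to [0, 1] for the CNN

# Example with random data standing in for a 62-channel EEG segment (10 s at 200 Hz).
segment = np.random.randn(62, 2000)
print(efdm(segment).shape)  # e.g. (62, 33)
```

A map like this can be fed to a small residual CNN as a single-channel image, which is in line with the abstract's use of residual blocks for automatic feature extraction.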
Traditional myoelectric prostheses that employ a static pattern recognition model to identify human movement intention from surface electromyography (sEMG) signals can hardly adapt to changes in the sEMG characteristics caused by interference from daily activities, which hinders the clinical application of such prostheses. In this paper, we focus on methods to reduce or eliminate the impact of three types of daily interference on myoelectric pattern recognition (MPR): outlier motions, muscle fatigue, and electrode doffing/donning. We constructed an adaptive incremental hybrid classifier (AIHC) by combining one-class support vector data description with multiclass linear discriminant analysis, in conjunction with two specific update schemes, and developed an AIHC-based MPR strategy to improve the robustness of MPR against the three interferences. Extensive hand-motion recognition experiments were conducted to demonstrate the performance of the proposed method. Experimental results show that the AIHC has significant advantages over non-adaptive classifiers under the various interferences, with improvements in classification accuracy ranging from 7.1% to 39% (p < 0.01). Additional evaluations on data deviations demonstrate that the AIHC can accommodate large-scale changes in the sEMG characteristics, revealing the potential of the AIHC-based MPR strategy for the development of clinical myoelectric prostheses. Index Terms: surface electromyography (sEMG), myoelectric prosthesis, adaptive classifier, online update.
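A conceptual sketch of the hybrid idea described above: a one-class model first decides whether a window of sEMG features belongs to the trained motion set (rejecting outlier motions), and a multiclass LDA then labels the accepted windows. In this sketch scikit-learn's OneClassSVM stands in for the support vector data description stage, and the AIHC's incremental update schemes are omitted; this is not the authors' implementation.

```python
# Sketch under stated assumptions: OneClassSVM as an SVDD stand-in, no online updates.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class HybridClassifier:
    def __init__(self, nu=0.05, gamma="scale"):
        self.detector = OneClassSVM(nu=nu, gamma=gamma)  # accept/reject stage
        self.lda = LinearDiscriminantAnalysis()          # motion-label stage

    def fit(self, X, y):
        self.detector.fit(X)   # learn the region covered by the trained motions
        self.lda.fit(X, y)     # learn the decision boundaries among those motions
        return self

    def predict(self, X, reject_label=-1):
        accepted = self.detector.predict(X) == 1         # +1 = inlier, -1 = outlier
        labels = np.full(len(X), reject_label)
        if accepted.any():
            labels[accepted] = self.lda.predict(X[accepted])
        return labels

# Example with synthetic feature vectors standing in for three trained motions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i * 3.0, size=(50, 8)) for i in range(3)])
y = np.repeat([0, 1, 2], 50)
clf = HybridClassifier().fit(X, y)
print(clf.predict(rng.normal(loc=0.0, size=(5, 8))))    # mostly class 0
print(clf.predict(rng.normal(loc=50.0, size=(5, 8))))   # mostly rejected (-1)
```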