Motor imagery (MI) electroencephalogram (EEG) signal classification has become a research hotspot in the field of brain-computer interfaces (BCIs). More recently, deep learning has emerged as a promising technique for automatically extracting features from raw MI EEG signals and classifying them. However, deep learning-based methods still face two challenges in practical MI EEG classification: (1) successfully training a deep learning model generally requires a large amount of labeled data, yet most EEG data is unlabeled, and it is difficult or even impossible for human experts to label every sample manually; (2) training a deep learning model from scratch is extremely time-consuming and computationally expensive. To address these two challenges, a deep transfer convolutional neural network (CNN) framework based on VGG-16 is proposed for EEG signal classification. The proposed framework consists of a VGG-16 CNN model pre-trained on ImageNet and a target CNN model that shares the same structure as VGG-16 except for the softmax output layer. The parameters of the pre-trained VGG-16 model are transferred directly to the target CNN model used for MI EEG classification. The front-layer parameters of the target model are then frozen, while the later-layer parameters are fine-tuned on the target MI dataset, which consists of time-frequency spectrum images of the EEG signals. The performance of the proposed framework is verified on the public benchmark dataset 2b from BCI Competition IV. The experimental results show that the proposed framework improves the accuracy and efficiency of EEG signal classification compared with traditional methods, including the support vector machine (SVM), artificial neural network (ANN), and a standard CNN.
INDEX TERMS Motor imagery (MI), electroencephalogram (EEG), signal classification, short-time Fourier transform (STFT), VGG-16, transfer learning.
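As a hedged illustration of the transfer strategy described in this abstract, the following PyTorch sketch loads an ImageNet-pretrained VGG-16, replaces the softmax output layer, freezes the front convolutional layers, and fine-tunes the rest. The exact layer split point, learning rate, and two-class setup are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of VGG-16 transfer learning for MI EEG spectrogram images.
# The freeze boundary (first three conv blocks) and hyperparameters are
# assumptions for illustration, not the paper's reported configuration.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g., left-hand vs. right-hand motor imagery (assumed)

# Load VGG-16 with ImageNet weights and replace the softmax output layer.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Freeze the front (feature-extraction) layers; later layers stay trainable.
for param in model.features[:17].parameters():  # first three conv blocks
    param.requires_grad = False

# Only parameters with requires_grad=True are updated during fine-tuning.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of 3x224x224 spectrogram images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the early blocks keeps the generic low-level filters learned on ImageNet intact while the task-specific later layers adapt to the time-frequency images, which is what makes training far cheaper than starting from scratch.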
Three-dimensional convolutional neural networks (3DCNNs), a rapidly evolving deep learning modality, have gained popularity in many fields. For oral cancers, CT images are traditionally processed as two-dimensional input, without considering the information between lesion slices. In this paper, we establish a 3DCNN-based image processing algorithm for the early diagnosis of oral cancers and compare it with a 2DCNN-based algorithm. The 3D and 2D CNNs were constructed with the same hierarchical structure to classify oral tumors as benign or malignant. Our results show that 3DCNNs using the dynamic characteristics of the enhancement-rate image performed better than 2DCNNs using a single enhancement sequence in discriminating oral cancer lesions. Our data indicate that the spatial features and spatial dynamics extracted by 3DCNNs may inform the future design of CT-assisted diagnosis systems.
INDEX TERMS 2DCNNs, 3DCNNs, CT images, spatial features, spatial dynamics.
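To make the 2D/3D distinction concrete, the sketch below (a minimal illustration, not the authors' architecture) contrasts a 2D convolution over individual CT slices with a 3D convolution over a slice stack; the channel counts and tensor sizes are assumptions.

```python
# Contrast of 2D vs. 3D convolution over CT data; sizes are illustrative.
import torch
import torch.nn as nn

# 2D: each lesion slice is processed independently (H x W).
conv2d = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
slice_batch = torch.randn(8, 1, 64, 64)          # (batch, channel, H, W)
print(conv2d(slice_batch).shape)                 # torch.Size([8, 16, 64, 64])

# 3D: the kernel also spans the depth axis (D x H x W), so the learned
# features capture inter-slice context that the 2D version discards.
conv3d = nn.Conv3d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
volume_batch = torch.randn(8, 1, 16, 64, 64)     # (batch, channel, D, H, W)
print(conv3d(volume_batch).shape)                # torch.Size([8, 16, 16, 64, 64])
```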
Automatic segmentation of skin lesions in dermoscopy images is an important step in diagnosing melanoma. However, skin lesion segmentation remains a challenging task due to blurred lesion borders, low contrast between the skin cancer region and the normal tissue background, and the varying sizes of cancer regions. In this paper, we propose a deep supervised multi-scale network (DSM-Network), which achieves satisfactory skin cancer segmentation by utilizing the side-output layers of the network to aggregate information from shallow and deep layers, and by designing a multi-scale connection block to handle variations in cancer size. Moreover, a contour-refinement post-processing step based on a conditional random field (CRF) model is adopted to further improve the segmentation results. Extensive experiments on two public datasets, ISBI 2017 and PH2, demonstrate that our designed DSM-Network achieves competitive performance compared with other state-of-the-art methods.
INDEX TERMS Skin cancer, dermoscopy image, deep supervised learning, multi-scale feature, conditional random field.
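The following sketch illustrates the general idea of deep supervision with side outputs, in the spirit of the shallow-and-deep aggregation described above; it is not the DSM-Network itself, and the two-stage depth and channel widths are assumptions.

```python
# Illustrative deep supervision: each encoder stage emits an auxiliary
# side output, which is upsampled and fused; each output can carry its
# own loss term. Depth and channel widths are assumed for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        # 1x1 convolutions turn intermediate feature maps into side outputs.
        self.side1 = nn.Conv2d(16, 1, 1)
        self.side2 = nn.Conv2d(32, 1, 1)
        self.fuse = nn.Conv2d(2, 1, 1)  # aggregate shallow & deep predictions

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        s1 = self.side1(f1)
        s2 = F.interpolate(self.side2(f2), size=s1.shape[2:],
                           mode='bilinear', align_corners=False)
        fused = self.fuse(torch.cat([s1, s2], dim=1))
        return fused, s1, s2  # supervise all three during training

net = DeeplySupervisedNet()
fused, s1, s2 = net(torch.randn(1, 3, 128, 128))
print(fused.shape)  # torch.Size([1, 1, 128, 128])
```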
Automatic segmentation of the lumen contour plays an important role in medical imaging and diagnosis; it is the first step toward evaluating the morphology of the vessel under analysis and identifying possible atherosclerotic lesions. Moreover, quantitative information can be obtained only through segmentation, which has motivated novel methods that can be successfully applied to intravascular optical coherence tomography (IVOCT) images. This paper proposes a new end-to-end neural network (N-Net), a multi-scale-feature-based deep neural network, for automatic lumen segmentation in IVOCT images. The N-Net architecture contains a multi-scale input layer, an N-type convolutional network, and a cross-entropy loss function. The multi-scale input layer is designed to avoid the loss of information caused by pooling in the traditional U-Net and to enrich the detailed information in each layer. The N-type convolutional network serves as the framework of the whole deep architecture. Finally, the loss function guarantees the fidelity between the output of the proposed method and the manually labeled ground truth. Data augmentation is also introduced to enlarge the training set. We evaluated our method in terms of loss, accuracy, recall, Dice similarity coefficient, Jaccard similarity coefficient, and specificity. The experimental results demonstrate that, compared with several existing networks, the proposed N-Net architecture improves the precision of automatic lumen segmentation and preserves more detail at the edges of the vascular lumen.
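As a rough illustration of a multi-scale input layer, the sketch below feeds a downsampled copy of the input image into a deeper encoder level alongside the pooled features, so detail lost to pooling can be re-injected. It is a simplified stand-in for the N-Net design, with assumed channel counts.

```python
# Sketch of a multi-scale input layer: a half-resolution copy of the raw
# image is concatenated with pooled features at the next encoder level.
# Channel counts and depth are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleInputEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        # Level 2 sees pooled features plus the half-size raw image
        # (16 + 1 input channels).
        self.block2 = nn.Sequential(nn.Conv2d(17, 32, 3, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.block1(x)
        pooled = F.max_pool2d(f1, 2)
        x_half = F.interpolate(x, scale_factor=0.5, mode='bilinear',
                               align_corners=False)
        f2 = self.block2(torch.cat([pooled, x_half], dim=1))
        return f1, f2

enc = MultiScaleInputEncoder()
f1, f2 = enc(torch.randn(1, 1, 256, 256))
print(f1.shape, f2.shape)  # [1, 16, 256, 256] [1, 32, 128, 128]
```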
Although the rapid advancement of three-dimensional (3D) printing technology has inspired new ideas in medical treatment, research on 3D printing of coronary arteries remains rare in the literature. In this work, the application value of 3D printing technology in the treatment of cardiovascular diseases is explored through a comparison between a 3D-printed vascular solid model and the corresponding computer-aided design (CAD) model. A new framework is proposed to produce a 3D-printed vascular model with high fidelity. Patient-specific 3D reconstruction of the coronary arteries is performed using the detailed morphological information abstracted from the contour of the vessel lumen. In the five-step reconstruction process, the morphological details of the lumen contour are merged with the curvature and length information provided by coronary angiography. A comparison of the diameters of the narrowed and normal sections in the CAD and 3D-printed models shows a high correlation between the vascular stenosis diameters measured in the two. The 3D-printed model offers high modeling capability and precision and can accurately represent the original coronary artery anatomy; it can be adopted for prevascularization planning to support doctors in determining surgical procedures.
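The reported agreement between CAD and printed diameters amounts to a correlation analysis. A minimal sketch is shown below; the diameter values are made-up placeholders, not the paper's measurements.

```python
# Hedged sketch of the agreement check: Pearson correlation between
# stenosis diameters measured on the CAD model and on the 3D-printed
# model. All values below are hypothetical placeholders.
import numpy as np

cad_diameters = np.array([2.1, 1.8, 2.5, 1.2, 3.0])      # mm, hypothetical
printed_diameters = np.array([2.0, 1.9, 2.4, 1.3, 2.9])  # mm, hypothetical

r = np.corrcoef(cad_diameters, printed_diameters)[0, 1]
print(f"Pearson correlation: {r:.3f}")  # values near 1 indicate agreement
```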