The classification of skin lesions is a challenging task even for experienced dermatologists due to the low contrast between lesions and the surrounding skin, the visual resemblance among lesion types, fuzzy lesion borders, etc. An automated computer-aided detection system operating on dermoscopic images can help clinicians diagnose malignant skin lesions at an early stage. Recent progress in deep learning includes dilated convolution, which is known to improve accuracy at roughly the same computational cost as traditional CNNs. To incorporate dilated convolution, we apply transfer learning with four popular architectures: VGG16, VGG19, MobileNet, and InceptionV3. The HAM10000 dataset, which contains a total of 10015 dermoscopic images across seven skin lesion classes with severe class imbalance, was used for training, validation, and testing. The top-1 accuracies achieved by the dilated versions of VGG16, VGG19, MobileNet, and InceptionV3 were 87.42%, 85.02%, 88.22%, and 89.81%, respectively. Dilated InceptionV3 exhibited the highest classification accuracy, recall, precision, and F1 score, while dilated MobileNet also achieved high classification accuracy with the lowest computational cost. To the best of our knowledge, dilated InceptionV3 achieved better overall and per-class accuracy than any previously reported method for skin lesion classification on this complex open-source dataset with class imbalance.
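As a rough illustration of this transfer-learning setup, the sketch below places a small dilated convolutional head on top of a frozen ImageNet-pretrained MobileNet and attaches a 7-way classifier for the HAM10000 classes. The abstract does not specify exactly where dilation is introduced in the paper's networks, so the layer placement, widths, and dilation rates here are illustrative assumptions only.

```python
# Hypothetical sketch: dilated convolutions on top of a pretrained MobileNet,
# fine-tuned for the 7 HAM10000 lesion classes. Layer widths and dilation
# rates are assumptions, not the paper's exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet features for the first training phase

x = layers.Conv2D(256, 3, padding="same", dilation_rate=2,   # dilation enlarges
                  activation="relu")(base.output)             # the receptive field
x = layers.Conv2D(256, 3, padding="same", dilation_rate=4,    # without extra
                  activation="relu")(x)                        # parameters
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(7, activation="softmax")(x)             # 7 lesion classes

model = models.Model(base.input, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```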
In the last century, humanity endured two severe pandemics with high fatality rates: the 1918 influenza (Spanish flu) pandemic and the 1957 influenza (Asian flu) pandemic. In recent months, we have again faced a new epidemic (COVID-19), a frighteningly high-risk disease that threatens human lives globally. Among the many attempts and proposed solutions to tackle COVID-19, publicly available radiological imaging datasets of chest radiographs, also called chest X-ray (CXR) images, can efficiently accelerate the detection of patients infected with COVID-19 through the abnormalities presented in their chest radiographs. In this study, we propose a deep neural network (DNN), RAM-Net, a new combination of MobileNet with Dilated Depthwise Separable Convolutions (DDSC), residual blocks, and attention-augmented convolution. The network was trained and validated on the COVIDx dataset, one of the most popular public chest X-ray (CXR) datasets. Using this model, we can accurately identify positive cases of COVID-19 viral infection when a new suspicious chest X-ray image is presented to the network. Our network's overall accuracy on the COVIDx test set was 95.33%, with a sensitivity of 92% and a precision of 99% for COVID-19 cases, which are, to the best of our knowledge, the highest results reported on the COVIDx dataset to date. Finally, we audited RAM-Net using Grad-CAM interpretations to demonstrate that the proposed architecture detects SARS-CoV-2 (COVID-19) viral infection by focusing on clinically relevant regions rather than relying on irrelevant information.
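To make the building blocks concrete, the sketch below shows one possible residual unit built from dilated depthwise separable convolutions. It is a simplification: the attention-augmented convolution branch is omitted, and the filter counts, dilation rate, and block layout are assumptions rather than RAM-Net's actual specification, which the abstract does not give.

```python
# Hypothetical sketch of a residual block built from Dilated Depthwise
# Separable Convolutions (DDSC). The attention-augmented branch and the
# real RAM-Net layer sizes are not reproduced here.
import tensorflow as tf
from tensorflow.keras import layers

def ddsc_residual_block(x, filters, dilation=2):
    """Residual unit: two dilated depthwise separable convs + skip connection."""
    shortcut = x
    if x.shape[-1] != filters:  # 1x1 conv so the skip matches the channel count
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.SeparableConv2D(filters, 3, padding="same",
                               dilation_rate=dilation, activation="relu")(x)
    y = layers.BatchNormalization()(y)
    y = layers.SeparableConv2D(filters, 3, padding="same",
                               dilation_rate=dilation)(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])      # residual connection
    return layers.Activation("relu")(y)

# Example: apply one block to a CXR-sized grayscale input.
inputs = tf.keras.Input(shape=(224, 224, 1))
features = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
features = ddsc_residual_block(features, filters=64, dilation=2)
```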
Protein secondary structure is crucial to bridging the information gap between the primary structure and the tertiary (3D) structure. Precise prediction of 8-state protein secondary structure (PSS) is widely utilized in the structural and functional analysis of proteins in bioinformatics. In recent years, deep learning techniques have been applied in this research area and have raised Q8 accuracy remarkably. Nevertheless, from a theoretical standpoint, there is still considerable room for improvement, specifically in 8-state (Q8) protein secondary structure prediction. In this paper, we present two deep learning architectures, 1D-Inception and BD-LSTM, to improve the performance of 8-class PSS prediction. The input to both architectures is a carefully constructed feature matrix built from the sequence features and profile features of the proteins. 1D-Inception is a deep convolutional neural network inspired by the InceptionV3 model and contains three inception modules. BD-LSTM is a recurrent neural network model comprising bidirectional LSTM layers. Our proposed 1D-Inception method achieved 76.65%, 71.18%, 76.86%, and 74.07% Q8 accuracy on the benchmark CullPdb6133, CB513, CASP10, and CASP11 datasets, respectively. BD-LSTM achieved 74.71%, 69.49%, 74.07%, and 72.37% 8-state accuracy when evaluated on the CullPdb6133, CB513, CASP10, and CASP11 datasets, respectively. Both architectures enable efficient processing of local and global interdependencies between amino acids, which is highly beneficial for accurate per-class prediction in a deep neural network. To the best of our knowledge, the experimental results of the 1D-Inception model demonstrate that it outperforms all state-of-the-art methods on the benchmark CullPdb6133, CB513, and CASP10 datasets.

Datasets and Methodology

Datasets

Here, we utilize five datasets: CullPdb 6133, CullPdb 6133 filtered, CB513, CASP10, and CASP11. Of these, CullPdb 6133 and CullPdb 6133 filtered are used for training, while CB513, CASP10, CASP11, and 272 protein sequences of CullPdb 6133 are used for testing.

CullPdb 6133: The CullPdb 6133 [51] dataset is a non-homologous protein dataset provided by PISCES CullPDB with known protein secondary structures. It contains a total of 6128 protein sequences, of which 5600 ([0:5600]) are used as the training set, 272 ([5605:5877]) for testing, and 256 ([5877:6133]) as the validation set. Moreover, the CullPdb 6133 (non-filtered) dataset has 57 per-residue features, such as amino acid residues (features [0:22)), N- and C-terminals (features [31:33)), relative and absolute solvent accessibility (features [33:35)), and sequence profile features (features [35:57)). We use the secondary structure notation (features [22:31)) for labeling. The CullPdb dataset is publicly available from [2].
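For concreteness, the sketch below loads CullPdb 6133 and slices the 57 per-residue features into inputs and Q8 labels using the index ranges listed above, then applies the train/test/validation split described in the text. The file name and the reshape to 700 residues per protein are assumptions about the common distribution of the dataset; adjust them to the copy referenced in [2].

```python
# Minimal sketch: load CullPdb 6133 and slice its 57 per-residue features
# into model inputs and Q8 labels, following the index ranges given above.
import gzip
import numpy as np

# Assumed file name/layout for the publicly distributed dataset.
with gzip.open("cullpdb+profile_6133.npy.gz", "rb") as f:
    data = np.load(f)
data = data.reshape(-1, 700, 57)     # (proteins, residues, per-residue features)

amino_acids = data[:, :, 0:22]       # one-hot amino acid residues
labels_q8   = data[:, :, 22:31]      # 8-state secondary structure labels
terminals   = data[:, :, 31:33]      # N- and C-terminal markers
solvent_acc = data[:, :, 33:35]      # relative and absolute solvent accessibility
profiles    = data[:, :, 35:57]      # sequence profile (PSSM) features

# Input feature matrix: sequence features concatenated with profile features.
features = np.concatenate([amino_acids, profiles], axis=-1)

# Train / test / validation split for CullPdb 6133 (non-filtered).
x_train, y_train = features[0:5600],    labels_q8[0:5600]
x_test,  y_test  = features[5605:5877], labels_q8[5605:5877]
x_val,   y_val   = features[5877:6133], labels_q8[5877:6133]
```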