2022
DOI: 10.1016/j.knosys.2021.107771

Protein secondary structure prediction using a lightweight convolutional network and label distribution aware margin loss

Cited by 12 publications (25 citation statements). References 39 publications.

“…MUFOLD-SS (Fang et al., 2018) uses a Deep inception-inside-inception (Deep3I) network to handle local and global dependencies in sequences. ShuffleNet_SS (Yang et al., 2022) uses a lightweight convolutional network and a label-distribution-aware margin loss to improve the network’s ability to learn rare classes. For a fair comparison, we use our dataset for training in the experiments, where the input is the hybrid feature PSSM + one-hot.…”
Section: Methods
confidence: 99%
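
The "label distribution aware margin" loss in the paper's title follows the LDAM formulation of Cao et al. (2019): each class j is assigned a margin proportional to n_j^(-1/4), so rarer secondary-structure states are pushed further from the decision boundary. The sketch below is a minimal PyTorch rendering of that idea; the maximum margin, the logit scale, and the eight-class setting are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LDAMLoss(nn.Module):
    """Label-distribution-aware margin loss (LDAM, Cao et al., 2019).

    Each class j gets a margin proportional to n_j^(-1/4), rescaled so
    the largest margin equals max_margin; rare classes therefore get the
    largest margins. Hyperparameters here are illustrative assumptions.
    """

    def __init__(self, class_counts, max_margin=0.5, scale=30.0):
        super().__init__()
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        margins = counts.pow(-0.25)
        margins = margins * (max_margin / margins.max())
        self.register_buffer("margins", margins)
        self.scale = scale

    def forward(self, logits, target):
        # Subtract the class-specific margin from the true-class logit only,
        # then apply ordinary cross-entropy on the rescaled logits.
        one_hot = F.one_hot(target, num_classes=logits.size(-1)).to(logits.dtype)
        adjusted = logits - one_hot * self.margins
        return F.cross_entropy(self.scale * adjusted, target)
```

For 8-state PSSP, class_counts would be the per-state residue counts of the training set, so rare states such as π-helix (I) and β-bridge (B) receive the largest margins.
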
“…In this section, we compare the proposed method with five state-of-the-art methods, DCRNN [42], CNN_BIGRU [43], DeepACLSTM [44], MUFOLD-SS [45] and ShuffleNet_SS [46], on seven test sets: CullPDB, CASP10, CASP11, CASP12, CASP13, CASP14 and CB513. The DCRNN model uses a multi-scale convolutional neural network to extract local features and a bidirectional recurrent network of gated units (BiGRU) to extract global features.…”
Section: Methods
confidence: 99%
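
As a rough illustration of the DCRNN-style design just described, the sketch below pairs parallel 1-D convolutions of different kernel sizes (local features) with a bidirectional GRU (global features). The kernel sizes, channel widths, and the 41-dimensional PSSM + one-hot input are assumptions for the sketch, not the published configuration.

```python
import torch
import torch.nn as nn

class MultiScaleConvBiGRU(nn.Module):
    """DCRNN-style sketch: multi-scale convolutions feed a bidirectional GRU.
    All sizes here are illustrative assumptions."""

    def __init__(self, in_dim=41, conv_channels=64, hidden=128, n_classes=8):
        super().__init__()
        # Parallel convolutions with different receptive fields ("multi-scale").
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, conv_channels, k, padding=k // 2)
            for k in (3, 7, 11)
        )
        self.gru = nn.GRU(3 * conv_channels, hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):            # x: (batch, length, in_dim)
        h = x.transpose(1, 2)        # Conv1d expects (batch, channels, length)
        h = torch.cat([torch.relu(c(h)) for c in self.convs], dim=1)
        h, _ = self.gru(h.transpose(1, 2))
        return self.head(h)          # per-residue logits: (batch, length, n_classes)
```
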
“…However, this standard practice raises an issue: during forward propagation the network turns the feature vectors at padded positions into non-zero vectors, which affects the predictions at the non-padded positions. To address this issue, Yang et al. [166] proposed a modified 1D batch normalization that exploits a mask matrix, so that padded positions do not participate in the normalization statistics. Hence, the feature vectors at these padded positions remain all-zero.…”
Section: PSSP in Post-AlphaFold Publication
confidence: 99%
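
A minimal PyTorch sketch of the masked batch normalization just described, under the assumption that sequences are batched as (batch, channels, length) with a 0/1 mask marking real residues: per-channel statistics are computed only over masked-in positions, and the output is re-masked so padded positions stay all-zero. The eps/momentum handling follows ordinary BatchNorm conventions and is not a detail from the paper.

```python
import torch
import torch.nn as nn

class MaskedBatchNorm1d(nn.Module):
    """Batch norm over non-padded positions only; padded positions are
    excluded from the statistics and zeroed in the output. eps and momentum
    follow standard BatchNorm conventions (an assumption)."""

    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        super().__init__()
        self.eps, self.momentum = eps, momentum
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x, mask):
        # x: (batch, channels, length); mask: (batch, 1, length), 1 = real residue.
        if self.training:
            n = mask.sum()                                  # number of valid positions
            mean = (x * mask).sum(dim=(0, 2)) / n
            var = (((x - mean[None, :, None]) * mask) ** 2).sum(dim=(0, 2)) / n
            with torch.no_grad():
                self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean)
                self.running_var.mul_(1 - self.momentum).add_(self.momentum * var)
        else:
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean[None, :, None]) / torch.sqrt(var[None, :, None] + self.eps)
        out = self.weight[None, :, None] * x_hat + self.bias[None, :, None]
        return out * mask  # feature vectors at padded positions stay all-zero
```
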
“…Hence, the feature vectors at these padded positions remain all-zero. Furthermore, they [166] also proposed a PSSP model whose architecture comprises CNNs and fully connected layers, together with the masked batch normalization.…”
Section: PSSP in Post-AlphaFold Publication
confidence: 99%
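
For completeness, here is a toy example of how such a CNN + fully-connected stack could be wired through the MaskedBatchNorm1d sketch above; the single conv layer, kernel size, and widths are purely illustrative assumptions.

```python
import torch.nn as nn

class MaskedConvFC(nn.Module):
    """Toy CNN + fully-connected per-residue classifier using the
    MaskedBatchNorm1d sketch above; all sizes are illustrative assumptions."""

    def __init__(self, in_dim=41, channels=128, n_classes=8):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, channels, kernel_size=7, padding=3)
        self.norm = MaskedBatchNorm1d(channels)  # defined in the sketch above
        self.act = nn.ReLU()
        self.fc = nn.Linear(channels, n_classes)

    def forward(self, x, mask):
        # x: (batch, in_dim, length); mask: (batch, 1, length)
        h = self.act(self.norm(self.conv(x), mask))   # padding re-zeroed by the norm
        return self.fc(h.transpose(1, 2))             # (batch, length, n_classes)
```
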