2019
DOI: 10.1101/786921
Preprint

SAINT: Self-Attention Augmented Inception-Inside-Inception Network Improves Protein Secondary Structure Prediction

Abstract: Protein structures provide basic insight into how proteins interact with other proteins, as well as into their functions and biological roles in an organism. Experimental methods (e.g., X-ray crystallography, nuclear magnetic resonance spectroscopy) for determining the secondary structure (SS) of proteins are very expensive and time consuming. Therefore, developing efficient computational approaches for predicting the secondary structure of proteins is of utmost importance. Advances in developing highly accurate SS prediction m…
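The title and abstract name the core architectural idea: inception-style convolutions augmented with self-attention. The paper's actual SAINT architecture is not reproduced here; the following is a minimal PyTorch sketch of that general combination, with all layer sizes, kernel widths, and class names (`InceptionBlock1D`, `SelfAttentionAugmentedBlock`) chosen purely for illustration.

```python
# Minimal sketch (not the authors' code): inception-style parallel 1D
# convolutions over a residue window, refined by multi-head self-attention.
import torch
import torch.nn as nn

class InceptionBlock1D(nn.Module):
    """Parallel Conv1d branches with different kernel sizes, concatenated."""
    def __init__(self, in_ch, branch_ch, kernels=(1, 3, 5)):  # illustrative sizes
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, branch_ch, k, padding=k // 2) for k in kernels
        )
        self.act = nn.ReLU()

    def forward(self, x):                 # x: (batch, channels, length)
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

class SelfAttentionAugmentedBlock(nn.Module):
    """Inception features followed by self-attention with a residual connection."""
    def __init__(self, in_ch, branch_ch=21, heads=3):
        super().__init__()
        self.inception = InceptionBlock1D(in_ch, branch_ch)
        d = branch_ch * 3                 # channels after concatenating 3 branches
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm = nn.LayerNorm(d)

    def forward(self, x):
        h = self.inception(x).transpose(1, 2)   # (batch, length, d)
        a, _ = self.attn(h, h, h)               # self-attention over positions
        return self.norm(h + a)                 # residual + layer norm

# Toy usage: batch of 2 windows, 21 input features per residue, length 128.
x = torch.randn(2, 21, 128)
y = SelfAttentionAugmentedBlock(in_ch=21)(x)
print(y.shape)                            # torch.Size([2, 128, 63])
```

The kernel sizes in the parallel branches let the block see local context at several scales at once, while the attention layer lets every residue position attend to every other; the residual connection keeps the convolutional features intact.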


Cited by 6 publications (4 citation statements)
References 76 publications
“…We have tuned hyperparameters for the DeepAffinity variants, including the learning rate ({10^-3, 10^-4}), the batch size ({64, 128}; 16 for CNN-GCN because of the GPU memory limit), and the dropout rate ({0.1, 0.2}), using a random 10% of the training data as validation sets. When HRNN was used to model protein sequences, we have also tuned k-mer lengths and group sizes in pairs [{(40, 30), (48, 25), (30, 40), (25, 48), (15, 80), (80, 15)} for Davis and {(40, 25), (50, 20), (25, 40), (20, 50)} for KIBA and PDBbind] using the validation sets.…”
Section: Results (mentioning, confidence: 99%)
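The statement above describes a small exhaustive search over a hyperparameter grid, scored on a held-out 10% validation split. A minimal sketch of that procedure follows, with `train_and_evaluate` as a hypothetical stand-in for the actual DeepAffinity training loop:

```python
# Minimal sketch (not the DeepAffinity code): exhaustive search over the
# reported hyperparameter grid, scored on a random 10% validation split.
import itertools
import random

def train_and_evaluate(lr, batch_size, dropout, train_idx, val_idx):
    """Hypothetical stand-in: train a model and return its validation loss."""
    return random.random()  # placeholder score

n_examples = 10_000                       # illustrative dataset size
idx = list(range(n_examples))
random.shuffle(idx)
val_idx, train_idx = idx[:n_examples // 10], idx[n_examples // 10:]

# Grids as reported: learning rate {1e-3, 1e-4}, batch size {64, 128}
# (16 for CNN-GCN), dropout rate {0.1, 0.2}.
grid = itertools.product([1e-3, 1e-4], [64, 128], [0.1, 0.2])

best_loss, best_cfg = min(
    (train_and_evaluate(lr, bs, dr, train_idx, val_idx), (lr, bs, dr))
    for lr, bs, dr in grid
)
print("best validation loss:", best_loss, "with (lr, batch, dropout):", best_cfg)
```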
“…Additionally, attention mechanisms have been used for predictions of CPI,[18] chemical stability,[19] and protein secondary structures.[20] Assessment of interpretability for all these studies was either lacking or limited to a few case studies. We note a recent work proposing a post-hoc attribution-based test to determine whether a model learns binding mechanisms.…”
Section: Introduction (mentioning, confidence: 99%)
“…Additionally, attention mechanisms have been used for predictions of CPI,[13] chemical stability,[14] and protein secondary structures.[15] Assessment of interpretability for all these studies was either lacking or limited to a few case studies. We note a recent work proposing a post-hoc attribution-based test to determine whether a model learns binding mechanisms.…”
Section: Introduction (mentioning, confidence: 99%)