Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics 2020
DOI: 10.1145/3388440.3412471
Variational Autoencoders for Protein Structure Prediction

Cited by 10 publications (9 citation statements)
References 25 publications
“…Several previous studies ( Alam et al, 2020 ; Alam and Shehu, 2020 ; Guo et al, 2020 ) focused on the evaluation of autoencoders on the generation of nonlinear featurization and the learned nonlinear representations of protein tertiary structures. In the current study, a similar strategy was employed to quantify and compare the performance of autoencoders and variational autoencoders.…”
Section: Methods
confidence: 99%
“…Moreover, to evaluate the quality of deep learning models, two distance-based metrics, maximum mean discrepancy and earth mover’s distance, were applied to compare the training and generated distributions. Following the strategy from a previous study ( Alam and Shehu, 2020 ), RMSDs were calculated as a proxy variable representing the protein tertiary structures. 1) Maximum mean discrepancy (MMD).…”
Section: Methods
confidence: 99%
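The comparison described in that statement — maximum mean discrepancy and earth mover's distance computed over RMSD values standing in for the training and generated structure distributions — can be sketched as follows. This is a minimal illustration, not the cited study's code: the RBF kernel bandwidth `gamma` and the synthetic Gaussian RMSD samples are assumptions chosen only to make the example self-contained.

```python
import math
import random

def rbf_mmd2(x, y, gamma=0.5):
    """Biased estimator of squared MMD between 1-D samples, RBF kernel."""
    def mean_k(a, b):
        return sum(math.exp(-gamma * (ai - bj) ** 2)
                   for ai in a for bj in b) / (len(a) * len(b))
    return mean_k(x, x) + mean_k(y, y) - 2.0 * mean_k(x, y)

def emd_1d(x, y):
    """For equal-size 1-D samples, earth mover's distance reduces to the
    mean absolute difference between the sorted samples."""
    xs, ys = sorted(x), sorted(y)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

random.seed(0)
# Synthetic RMSD values standing in for "training" and "generated" structures.
rmsd_train = [random.gauss(4.0, 1.0) for _ in range(200)]
rmsd_gen   = [random.gauss(4.5, 1.2) for _ in range(200)]  # shifted distribution
rmsd_same  = [random.gauss(4.0, 1.0) for _ in range(200)]  # same distribution

mmd_diff = rbf_mmd2(rmsd_train, rmsd_gen)
mmd_same = rbf_mmd2(rmsd_train, rmsd_same)
emd_diff = emd_1d(rmsd_train, rmsd_gen)
emd_same = emd_1d(rmsd_train, rmsd_same)
```

Both metrics should report a larger discrepancy for the shifted pair than for two draws from the same distribution, which is the property that makes them usable as quality scores for a generative model.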
“…While work on this is largely beginning, various generative deep models can be found in literature [ 17 ]. They aim to learn directly from tertiary structures typically represented as contact maps or distance matrices through primarily variational autoencoders (VAEs) [ 18 , 19 ] or generative adversarial networks (GANs) [ 20 , 21 , 22 ]. Until recently [ 23 ], the majority of these models were limited to learning from same-length protein fragments.…”
Section: Introduction
confidence: 99%
“…We note that more progress has been made recently with Variational Autoencoders (VAEs), which provide a generative framework complementary to GANs. We point here two representative works in this area [ 19 , 20 ]. However, these works train a VAE on structures generated for a specific protein molecule, and these structures are obtained from computational platforms, such as MD simulations [ 19 ] or protein structure prediction platforms, such as Rosetta [ 20 ].…”
Section: Introduction
confidence: 99%
“…We point here two representative works in this area [ 19 , 20 ]. However, these works train a VAE on structures generated for a specific protein molecule, and these structures are obtained from computational platforms, such as MD simulations [ 19 ] or protein structure prediction platforms, such as Rosetta [ 20 ]. None of these works leverage known experimental structures in the PDB, which has been the trend in the nascent sub-area of GANs for protein structure modeling as a way of learning from the actual ground truth distribution rather than other computational frameworks.…”
Section: Introduction
confidence: 99%