2022
DOI: 10.1101/2022.10.07.511322
Preprint

EquiFold: Protein Structure Prediction with a Novel Coarse-Grained Structure Representation

Abstract: Designing proteins to achieve specific functions often requires in silico modeling of their properties at high throughput scale and can significantly benefit from fast and accurate protein structure prediction. We introduce EquiFold, a new end-to-end differentiable, SE(3)-equivariant, all-atom protein structure prediction model. EquiFold uses a novel coarse-grained representation of protein structures that does not require multiple sequence alignments or protein language model embeddings, inputs that are commo…
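
The abstract describes an SE(3)-equivariant model that predicts all-atom structures from a coarse-grained representation. As a rough illustration of the general frame-based idea only, the sketch below places a rigid fragment of idealized local atom coordinates with a predicted rotation and translation; the fragment definition and template values are invented for this sketch and are not EquiFold's actual representation or code.

```python
# Illustrative sketch only: a generic coarse-grained, frame-based atom
# parameterization in the spirit of the abstract. Fragment definitions and
# template coordinates are placeholders, NOT EquiFold's implementation.
import numpy as np

# Hypothetical fragment template: approximate idealized local coordinates
# (in Angstrom) of three backbone atoms, expressed in the fragment's frame.
BACKBONE_TEMPLATE = np.array([
    [-0.525, 1.363, 0.000],   # N
    [ 0.000, 0.000, 0.000],   # CA (frame origin)
    [ 1.526, 0.000, 0.000],   # C
])

def place_fragment(rotation: np.ndarray, translation: np.ndarray,
                   template: np.ndarray) -> np.ndarray:
    """Map idealized local coordinates into global space with a rigid
    SE(3) transform: x_global = R @ x_local + t (rows are atoms)."""
    return template @ rotation.T + translation

# Example: one coarse-grained node parameterized by a rotation + translation.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, 5.0, 2.0])

print(place_fragment(R, t, BACKBONE_TEMPLATE).round(3))
```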

Cited by 24 publications (29 citation statements)
References 32 publications

“…For example, DeepH3 was shown to outperform TrRosetta on antibodies [28, 29], while Nanonet obtains results of similar accuracy to AlphaFold2 on nanobodies with a far simpler architecture [30]. More recent examples of this are IgFold [25] and EquiFold [31], where the authors trained antibody-specific models that predict structures of comparable accuracy to AlphaFold-Multimer.…”
Section: Structure Prediction (mentioning)
Confidence: 99%
“…We addressed the issue of generating structural antibody models of better physical quality without detriment to prediction of the backbone Cα. We trained a very simplistic deep learning model with a rudimentary loss function to minimize the risk of better peptide bond distances being the result of ingenious model architectures (Lee et al 2022; Ruffolo et al 2022; Abanades, Wong, et al 2022). We demonstrated that even such simple models can learn better quality peptide bonds if given the benefit of pre-training on a large augmented set of refined antibody structures.…”
Section: Discussion (mentioning)
Confidence: 99%
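
The statement above uses peptide-bond geometry as a physical-quality criterion for predicted antibody structures. A minimal sketch of such a check, assuming an idealized C-N peptide bond length of roughly 1.33 Å and placeholder coordinates (this is not code from the cited work):

```python
# Minimal sketch: deviation of predicted C(i)-N(i+1) peptide bond lengths
# from an idealized value, as a simple physical-quality metric.
import numpy as np

IDEAL_PEPTIDE_BOND = 1.33  # approximate C-N peptide bond length, Angstrom

def peptide_bond_deviations(c_coords: np.ndarray, n_coords: np.ndarray) -> np.ndarray:
    """Distance between C of residue i and N of residue i+1, minus the ideal value.

    c_coords: (L-1, 3) carbonyl-carbon coordinates of residues 1..L-1
    n_coords: (L-1, 3) backbone-nitrogen coordinates of residues 2..L
    """
    lengths = np.linalg.norm(n_coords - c_coords, axis=-1)
    return lengths - IDEAL_PEPTIDE_BOND

# Toy example with two consecutive peptide bonds (placeholder coordinates).
c = np.array([[0.00, 0.0, 0.0], [3.8, 0.0, 0.0]])
n = np.array([[1.33, 0.0, 0.0], [5.3, 0.0, 0.0]])
print(peptide_bond_deviations(c, n))  # -> [0.   0.17]
```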
“…To address this issue, but also to draw from AlphaFold2 and its derivatives (Ahdritz et al 2022; Lin et al 2022), many antibody-specific deep learning structure prediction methods have been introduced (Wilman et al 2022; Lin et al 2022). The algorithms started by addressing CDR-H3 loop prediction, such as DeepH3 (Ruffolo et al 2020) and AbLooper (Abanades, Georges, et al 2022), later extending to the entire variable domain via NanoNet (Cohen, Halfon, and Schneidman-Duhovny 2022) and the entire Fv molecule with DeepAb (Ruffolo, Sulam, and Gray 2022), IgFold (Ruffolo et al 2022), AbodyBuilder2 (Abanades, Wong, et al 2022), EquiFold (Lee et al 2022) and tFold-Ab (Wu et al 2022). As opposed to the homology methods that reported CDR-H3 root mean squared deviation (RMSD) accuracies in the region of 3-4Å (Almagro et al 2014), the deep learning methods achieve an RMSD of 2-3Å.…”
Section: Introduction (mentioning)
Confidence: 99%
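
The 2-4 Å figures quoted above are root mean squared deviations after rigid superposition of the predicted and reference coordinates. A small illustrative sketch of that metric using the Kabsch algorithm (not taken from any of the cited papers):

```python
# Illustrative sketch: backbone RMSD after optimal rigid superposition
# (Kabsch algorithm), the metric behind the CDR-H3 accuracies quoted above.
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between two (N, 3) coordinate sets after optimal superposition."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

# Toy example: a rotated copy of the same points gives RMSD ~0.
pts = np.random.default_rng(0).normal(size=(10, 3))
theta = np.deg2rad(45.0)
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
print(round(kabsch_rmsd(pts @ rot.T, pts), 6))  # ~0.0
```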
“…The database and web application TMvisDB offers a straightforward search functionality and visualization interface. While several accurate structure prediction methods have been made available over the last year [14][15][16][17], we chose to enhance TMvisDB sequence annotations with AlphaFold2 [12] predictions that have been shown to perform well in structural analysis of transmembrane proteins (TMPs) [52], and have, therefore, been successfully applied as input by resources such as the TmAlphaFold database that collects alpha-helical TMPs [34].…”
Section: Discussion (mentioning)
Confidence: 99%