2021 | DOI: 10.1021/acs.jcim.1c00537

Molecule Edit Graph Attention Network: Modeling Chemical Reactions as Sequences of Graph Edits

Abstract: The central challenge in automated synthesis planning is to generate and predict the outcomes of a diverse set of chemical reactions. In many cases, the most likely synthesis pathway cannot be applied due to additional constraints, which requires proposing alternative chemical reactions. With this in mind, we present Molecule Edit Graph Attention Network (MEGAN), an end-to-end encoder-decoder neural model. MEGAN is inspired by models that express a chemical reaction as a sequence of graph…
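The abstract's core idea, expressing a reaction as an ordered sequence of graph edits, can be illustrated with a minimal, self-contained sketch. Plain Python dictionaries stand in for real molecular graphs here; the action names and the `apply_edits` helper are illustrative assumptions, not MEGAN's actual action space or implementation.

```python
# Minimal sketch: a "molecule" as an undirected graph of atoms and bonds,
# and a reaction expressed as an ordered sequence of graph edits.
# All names below are illustrative; MEGAN's real actions and features differ.

def apply_edits(graph, edits):
    """Apply a sequence of edit actions to a molecular graph.

    graph: {"atoms": {idx: symbol}, "bonds": {frozenset({i, j}): order}}
    edits: list of (action, payload) tuples, applied in order.
    """
    for action, payload in edits:
        if action == "add_atom":
            idx, symbol = payload
            graph["atoms"][idx] = symbol
        elif action in ("add_bond", "change_bond"):
            i, j, order = payload
            graph["bonds"][frozenset((i, j))] = order
        elif action == "delete_bond":
            i, j = payload
            graph["bonds"].pop(frozenset((i, j)))
    return graph

# Toy retrosynthesis-style example: break one bond in a 3-atom "product".
product = {"atoms": {0: "C", 1: "O", 2: "C"},
           "bonds": {frozenset((0, 1)): 1, frozenset((1, 2)): 1}}
edits = [("delete_bond", (1, 2))]          # a single edit suffices here
reactant_side = apply_edits(product, edits)
print(len(reactant_side["bonds"]))          # one bond remains
```

A decoder in this framing would emit one such action per step, conditioned on the current graph, until a stop action is produced.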






Cited by 91 publications (108 citation statements)
References 72 publications
“…[see e.g. 4,5,6,7,8,9,10,11,12,13,14]. For the practitioner that wants to use a method for predictions, or for a researcher developing a novel method, there are several comparisons for a subset of the available methods.…”
Section: Introduction
confidence: 99%
“…As baseline models, we compared our prediction results with five state-of-the-art retrosynthesis models: GLN (conditional graph logic network), 14 G2G (graph to graph), 15 GraphRetro , 16 MEGAN (molecule edit graph attention network), 17 and Augmented Transformer . 25 We denote our method by LocalRetro to emphasize the core idea, local reactivity prediction.…”
Section: Methods and Datasets
confidence: 99%
“…Baselines Template-based GLN (Dai et al, 2019), template-free G2G (Shi et al, 2020) and RetroXpert (Yan et al, 2020) are primary baselines, which not only achieve state-of-the-art performance, but also provide open-source PyTorch code that allows us to verify their effectiveness. To show broad superiority, we also compare SemiRetro with other baselines, including RetroSim (Coley et al, 2017b), NeuralSym (Segler & Waller, 2017), SCROP (Zheng et al, 2019), LV-Transformer (Chen et al, 2019), GraphRetro (Somnath et al, 2021), MEGAN (Sacha et al, 2021), MHNreact (Seidl et al, 2021), and Dual model (Sun et al, 2020). As the retrosynthesis task is quite complex, subtle implementation differences or mistakes may cause critical performance fluctuations.…”
Section: Basic Setting
confidence: 99%
“…For example, the top-50 semi-templates cover 92.6% of cases, while the full templates cover only 26.8%.

Top-k accuracy (%), top-1 / top-3 / top-5 / top-10, with and without reaction class given:
G2G (Shi et al, 2020): 61.0 81.3 86.0 88.7 | 48.9 67.6 72.5 75.5
RetroXpert (Yan et al, 2020): 62.1 75.8 78.5 80.9 | 50.4 61.1 62.3 63.4
GraphRetro (Somnath et al, 2021): 63.9 81.5 85.2 88.1 | 53.7 68.3 72.2 75.5
MEGAN (Sacha et al, 2021): 60.7 82.0 87.5 91.6 | 48.1 70.7 78.4 86.1
MHNreact (Seidl et al, 2021): – – – – | 50.5 73.9 81.0 87.9
Dual (Sun et al, 2020): 65 …

Although the reported accuracy is not optimum, SemiRetro is more scalable and efficient. The semi-template allows encoding property changes of existing atoms and bonds.…”
Section: Synthon Completion (Q2)
confidence: 99%
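The top-k accuracies quoted above are computed by checking whether the ground-truth reactant set appears among a model's k highest-ranked predictions. A minimal sketch of that metric (the data below are hypothetical placeholders, not results from any of the cited papers):

```python
def top_k_accuracy(ranked_predictions, truths, k):
    """Fraction of examples whose ground truth appears in the top-k
    ranked predictions. ranked_predictions[i] is ordered from most to
    least likely; truths[i] is the reference answer for example i."""
    hits = sum(1 for preds, truth in zip(ranked_predictions, truths)
               if truth in preds[:k])
    return hits / len(truths)

# Hypothetical SMILES-like strings, for illustration only.
preds = [["CCO", "CCN", "CCC"], ["OCC", "CCO", "NCC"]]
truth = ["CCO", "CCO"]
print(top_k_accuracy(preds, truth, 1))  # 0.5: only the first example hits at rank 1
print(top_k_accuracy(preds, truth, 3))  # 1.0: both truths found within the top 3
```

In practice, predicted and reference molecules are compared after canonicalization, so that different but equivalent string encodings of the same molecule count as a match.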