2021
DOI: 10.26434/chemrxiv.13234289.v3
Preprint

Comparative Study of Deep Generative Models on Chemical Space Coverage (v18)

Abstract: In recent years, deep molecular generative models have emerged as novel methods for de novo molecular design. Thanks to the rapid advance of deep learning techniques, deep learning architectures such as recurrent neural networks, generative autoencoders, and adversarial networks, to give a few examples, have been employed for constructing generative models. However, so far the metrics used to evaluate these deep generative models are not discriminative enough to separate the perform…


Cited by 2 publications (3 citation statements)
References 29 publications (40 reference statements)
“…We have shown our model is able to perform well in several tasks most notably promoting the generation of DRD2 active molecules. While favouring certain properties, our RL framework also improves other performance metrics including increasing the percentage of valid and properly terminated molecules, reaching validity rates comparable to that of state-of-the-art models (Brown et al, 2019;Polykovskiy et al, 2020;Zhang et al, 2021).…”
Section: Discussion (mentioning)
confidence: 87%
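
The validity rate quoted above is conventionally estimated by attempting to parse each generated SMILES string with a cheminformatics toolkit. A minimal sketch of such a check, assuming RDKit and a hypothetical list of generated SMILES (not the exact evaluation code used in the cited works):

```python
from rdkit import Chem, RDLogger

RDLogger.DisableLog("rdApp.*")  # silence RDKit warnings for unparsable strings

def validity_rate(smiles_list):
    """Fraction of generated SMILES that RDKit can parse into a molecule."""
    if not smiles_list:
        return 0.0
    valid = sum(1 for smi in smiles_list if Chem.MolFromSmiles(smi) is not None)
    return valid / len(smiles_list)

# Hypothetical generated samples
generated = ["CCO", "c1ccccc1", "C1CC1N", "not_a_smiles"]
print(f"Validity: {validity_rate(generated):.1%}")  # 3 of 4 parse -> 75.0%
```
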
“…The main drawback of the proposed model is the amount of time and computational power needed to pre-train the underlying GraphINVENT model (a few days on an NVIDIA Tesla K80); however, this is on par with that needed for other state-of-the-art molecular DGMs (Zhang et al, 2021), and only has to be done once per dataset. After pre-training, fine-tuning the model with RL is comparatively quick and requires only between 10–40 minutes, where scoring the model is the main bottleneck.…”
Section: Discussion (mentioning)
confidence: 99%
“…Cieplinski et al. proposed to better represent real discovery problems by using docking as a benchmark of the different methods of goal-directed generation. Zhang et al. improved on their earlier work, which measured the coverage of chemical space by generative models, by also evaluating the coverage of functional groups and ring systems. Furthermore, the team provided results for various recently introduced generative model architectures, allowing for their comparison as well as providing useful baselines for future works.…”
Section: Deep Learning Models For De Novo Molecular Design (mentioning)
confidence: 99%
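
The ring-system coverage mentioned above can be illustrated by extracting the fused ring systems from each generated molecule and counting how many distinct systems appear across the generated set. A minimal sketch, assuming RDKit; the merging of fused rings and the example SMILES are illustrative and not the exact procedure used in the cited work:

```python
from rdkit import Chem

def ring_systems(mol):
    """Return canonical SMILES of each fused ring system in a molecule."""
    systems = []  # list of atom-index sets, one per fused ring system
    for ring in mol.GetRingInfo().AtomRings():
        ring = set(ring)
        fused = [s for s in systems if s & ring]  # rings sharing atoms are fused
        for s in fused:
            ring |= s
            systems.remove(s)
        systems.append(ring)
    return {Chem.MolFragmentToSmiles(mol, atomsToUse=sorted(s), canonical=True)
            for s in systems}

# Hypothetical generated set; coverage is the number of distinct ring systems seen
generated = ["c1ccc2ccccc2c1", "C1CCOC1", "c1ccccc1CC1CC1"]
seen = set()
for smi in generated:
    mol = Chem.MolFromSmiles(smi)
    if mol is not None:
        seen |= ring_systems(mol)
print(f"{len(seen)} distinct ring systems in the generated set")
```
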