Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2018)
DOI: 10.1145/3219819.3219882

Accelerating Prototype-Based Drug Discovery using Conditional Diversity Networks

Abstract: Designing a new drug is a lengthy and expensive process. Because the space of potential molecules is very large (10^23–10^60), a common technique in drug discovery is to start from a molecule that already has some of the desired properties. An interdisciplinary team of scientists then generates hypotheses about the changes required to the prototype. In this work, we develop an unsupervised algorithmic approach that automatically generates potential drug molecules given a prototype drug. We show that the molecule…
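The core idea the abstract describes — start from a prototype and propose nearby variants — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the prototype has already been encoded into a latent vector (here a plain list of floats standing in for a learned VAE encoding), and the function name and parameters are hypothetical.

```python
import random

def generate_candidates(prototype_z, n=5, noise_scale=0.3, seed=0):
    """Perturb a prototype's latent code to propose nearby candidate codes.

    prototype_z: latent vector of the prototype drug (stand-in for a VAE
    encoding); a decoder would map each perturbed code back to a molecule.
    noise_scale: how far candidates may stray from the prototype.
    """
    rng = random.Random(seed)
    return [
        [z + rng.gauss(0.0, noise_scale) for z in prototype_z]
        for _ in range(n)
    ]

# Three candidate latent codes near a toy 3-dimensional prototype encoding.
candidates = generate_candidates([0.1, -0.4, 0.7], n=3)
```

In the paper's setting, each perturbed code would be decoded back into a SMILES string; the sketch only shows the latent-space perturbation step.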


Cited by 18 publications (6 citation statements)
References 27 publications
“…Conditional Diversity Networks and prototype‐driven diversity networks are VAE‐based networks set by Harel and Radinsky. The encoder had convolutions, acting like substructure filters, and a diversity layer.…”
Section: Deep Learning For Molecular Generation
confidence: 99%
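The citation above describes convolutions "acting like substructure filters": a kernel sliding over an embedded SMILES sequence responds strongly wherever a short local pattern appears. A minimal sketch of that idea, with a hypothetical scalar embedding and kernel (real models use multi-channel embeddings and learned kernels):

```python
def conv1d(seq, kernel):
    """Slide a kernel over an embedded sequence; each kernel acts like a
    detector for one short substructure pattern (illustrative only)."""
    k = len(kernel)
    return [
        sum(seq[i + j] * kernel[j] for j in range(k))
        for i in range(len(seq) - k + 1)
    ]

# Toy scalar "embedding" of a 4-token sequence, filtered by a width-2 kernel.
response = conv1d([1, 0, 1, 0], [1, 1])  # -> [1, 1, 1]
```

Peaks in the response mark positions where the filtered pattern occurs, which is what lets the encoder pick up recurring substructures.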
“…For example, to translate Japanese to English correctly, the ED model must understand not only the characters of both Japanese and English but also the context of the strings [11,12]. The latent representation of NMT models learning SMILES includes the context of SMILES, i.e., the entire chemical structure [13][14][15].…”
Section: Introduction
confidence: 99%
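The citation above treats a SMILES string as a sequence whose context encodes the whole chemical structure. Before such a sequence model can read SMILES, the string must be split into chemically meaningful tokens; a simplified, illustrative tokenizer (the regex covers only common two-letter elements and bracket atoms, not the full SMILES grammar):

```python
import re

# Match Cl/Br, bracket atoms like [NH+], or any single character.
SMILES_TOKEN = re.compile(r"Cl|Br|\[[^\]]+\]|.")

def tokenize(smiles):
    """Split a SMILES string into tokens, keeping two-letter elements and
    bracket atoms intact (simplified for illustration)."""
    return SMILES_TOKEN.findall(smiles)

tokens = tokenize("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
```

Note that `tokenize("CCl")` yields `["C", "Cl"]` rather than three characters, which is why character-level splitting alone is not enough for SMILES.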