2022
DOI: 10.48550/arxiv.2209.14734
Preprint
DiGress: Discrete Denoising Diffusion for Graph Generation

Cited by 17 publications (28 citation statements)
References 0 publications
“…Besides the DPMs for continuous data, some works study discrete DPMs and achieve impressive results on generating text (Austin et al., 2021; Li et al., 2022), graphs (Vignac et al., 2022), and images (Hoogeboom et al., 2021). Inspired by this progress, DPMs have been adopted to solve problems in the chemistry and biology domains, including molecule generation (Xu et al., 2022b; Hoogeboom et al., 2022; Wu et al., 2022c; Jing et al., 2022), molecular representation learning (Liu et al., 2022), protein structure prediction (Wu et al., 2022b), protein-ligand binding (Corso et al., 2022), protein design (Anand & Achim, 2022; Luo et al., 2022; Ingraham et al., 2022; Watson et al., 2022) and motif-scaffolding (Trippe et al., 2022).…”
Section: Related Work
confidence: 99%
“…Then, the attention weights are multiplied with V_m1 to create the final representation of the annotation matrix. The new representation of the adjacency matrix is the concatenated version of the [45,46]. For our default model, the output dimension of the transformer is 128 for both the annotation and adjacency matrices.…”
Section: GAN1 Generator
confidence: 99%
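The quoted passage describes an attention step over the graph's annotation (node-feature) matrix: attention weights are applied to a value projection V_m1, with a transformer output dimension of 128. The PyTorch sketch below is a minimal illustration of that step under stated assumptions; the class and parameter names (AnnotationAttention, n_node_feats) are hypothetical, and the adjacency-concatenation step is omitted because the quote does not fully specify it.

import torch
import torch.nn as nn

class AnnotationAttention(nn.Module):
    # Minimal sketch (hypothetical names): attention weights over the
    # annotation matrix are multiplied with a value projection V_m1 to
    # produce the final annotation representation.
    def __init__(self, n_node_feats: int, d_model: int = 128):
        super().__init__()
        self.q = nn.Linear(n_node_feats, d_model)  # query projection
        self.k = nn.Linear(n_node_feats, d_model)  # key projection
        self.v = nn.Linear(n_node_feats, d_model)  # value projection -> V_m1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_nodes, n_node_feats) annotation matrix
        q, k, v_m1 = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        attn = torch.softmax(scores, dim=-1)  # attention weights
        return attn @ v_m1  # (batch, n_nodes, d_model=128)

# Usage: 9 nodes with 5 annotation features -> output shape (2, 9, 128)
x = torch.randn(2, 9, 5)
out = AnnotationAttention(n_node_feats=5)(x)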
“…Baselines. We compare our HGGT with twelve deep graph generative models: GraphVAE [5], GraphRNN [18], GNF [37], GRAN [23], EDP-GNN [24], GraphGen [13], GraphAF [25], GraphDF [26], SPECTRE [35], GDSS [8], DiGress [9], and GDSM [36]. We provide a detailed description of the implementations in Appendix E.…”
Section: Generic Graph Generation
confidence: 99%
“…Baselines. We compare HGGT with eight deep graph generative models: EDP-GNN [24], MoFlow [39], GraphAF [25], GraphDF [26], GraphEBM [7], GDSS [8], DiGress [9], and GDSM [36]. We provide a detailed description of the implementation in Appendix E. Results.…”
Section: Molecular Graph Generation
confidence: 99%