2020
DOI: 10.1109/access.2020.3018033

Adversarial Attention-Based Variational Graph Autoencoder

Abstract: Autoencoders have been successfully used for graph embedding, and many variants have been proven to effectively express graph data and conduct graph analysis in low-dimensional space. However, previous methods ignore the structure and properties of the reconstructed graph, or they do not consider the potential data distribution in the graph, which typically leads to unsatisfactory graph embedding performance. In this paper, we propose the adversarial attention variational graph autoencoder (AAVGA), which is a …
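The abstract builds on the standard variational graph autoencoder (VGAE): a graph-convolutional encoder produces per-node mean and log-variance, latent embeddings are sampled via the reparameterization trick, and an inner-product decoder reconstructs edge probabilities. The sketch below illustrates that generic VGAE idea only, on a toy graph with made-up weights; it is not the paper's AAVGA implementation, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy graph: 4 nodes, symmetric adjacency with self-loops.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=np.float64)
X = rng.normal(size=(4, 3))            # node features
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))   # symmetric normalization D^-1/2 A D^-1/2

# One shared graph-convolution layer, then separate mu / log-variance heads.
W = rng.normal(size=(3, 2))
H = np.tanh(A_norm @ X @ W)
W_mu, W_logvar = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
mu, logvar = A_norm @ H @ W_mu, A_norm @ H @ W_logvar

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

# Inner-product decoder: sigmoid(z z^T) gives reconstructed edge probabilities.
A_hat = 1.0 / (1.0 + np.exp(-(z @ z.T)))
print(A_hat.shape)  # (4, 4)
```

Training would maximize the evidence lower bound (reconstruction term plus a KL penalty on z); the adversarial and attention components the paper adds sit on top of this backbone.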

Cited by 15 publications (12 citation statements)
References 17 publications
“…As per the results in Table 7, it is evident that the proposed method is superior to the others in terms of SSIM, although RDN [44] demonstrates higher PSNR values. While DCRN shows a better PSNR-B than DnCNN, its PSNR-B on the Classic5 dataset is comparable to that of DCSC.…”
Section: Results
confidence: 83%
“…All experiments were performed on an Intel Xeon Gold 5120 (14 cores @ 2.20 GHz) with 177 GB RAM and two NVIDIA Tesla V100 GPUs under the experimental environment described in Table 5. In terms of the performance of image restoration, we compared the proposed DCRN with JPEG, ARCNN [30], DnCNN [33], DCSC [42], IDCN [43] and RDN [44]. In terms of the AR performance (i.e., PSNR and SSIM), the number of parameters and total memory size, the performance comparisons between the proposed and existing methods are depicted in Figure 8.…”
Section: Results
confidence: 99%
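The statement above compares image-restoration quality by PSNR (and SSIM). PSNR is defined as 10·log10(MAX² / MSE) in decibels, where MAX is the peak pixel value. A minimal sketch of that formula (not code from any of the cited papers; the toy images are ours):

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: an 8x8 reference patch and a noisy copy of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0.0, 5.0, size=(8, 8)), 0.0, 255.0)
print(f"{psnr(ref, noisy):.1f} dB")
```

Higher is better: Gaussian noise with standard deviation 5 yields roughly 34 dB here, while typical restoration results on benchmarks such as Classic5 land in the high-20s to mid-30s.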
“…Weng, Z. [15] describes a variational graph autoencoder with an attention-based mechanism. To improve encoder performance, they learn attention weights for each node of the network, which leads to improvements in the performance of the autoencoder.…”
Section: Related Work
confidence: 99%
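The per-node attention weighting described in that statement can be sketched generically: score each node against its neighbours, mask non-edges, and softmax per row so every node holds a weight distribution over its neighbourhood. This is a standard graph-attention-style aggregation, not the paper's exact formulation; all names and the dot-product scoring are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=np.float64)   # adjacency with self-loops
H = rng.normal(size=(3, 2))                   # node embeddings

# Score every node pair, mask non-edges with -inf, softmax per row:
# each node gets a normalized weight distribution over its neighbours.
scores = H @ H.T
scores = np.where(A > 0, scores, -np.inf)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

H_new = weights @ H  # attention-weighted neighbour aggregation
print(weights.round(2))
```

Amplifying informative neighbours this way, rather than averaging them uniformly, is what lets an attention-based encoder produce sharper node embeddings.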