2019 53rd Asilomar Conference on Signals, Systems, and Computers
DOI: 10.1109/ieeeconf44664.2019.9048908
Deep Encoder-Decoder Neural Network Architectures for Graph Output Signals

Cited by 5 publications (3 citation statements). References 14 publications.

“…The second architecture performs graph upsampling operations to progressively increase the size of the input until it matches the size of the observed signal. The upsampling operators are based on hierarchical clustering algorithms [15], [23]- [25] so that, in contrast with [26], matrix inversions are not required, avoiding the related numerical issues. Our work is substantially different from [15], [16], which deal with graph encoder-decoder architectures.…”
Section: Introduction
Mentioning; confidence: 99%
“…The upsampling operators are based on hierarchical clustering algorithms [15], [23]- [25] so that, in contrast with [26], matrix inversions are not required, avoiding the related numerical issues. Our work is substantially different from [15], [16], which deal with graph encoder-decoder architectures. On top of our theoretical analysis and extensive numerical simulations, additional differences to prior work are that: (a) our graph decoder is an untrained network, and thus, it does not need a training phase; (b) we only require a decoder-like architecture for denoising graph signals, so it is not necessary to jointly design and train two different architectures as done in [15], [16].…”
Section: Introduction
Mentioning; confidence: 99%
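
The cluster-based upsampling this citing work describes can be pictured with a short sketch. This is a minimal illustration under assumed details, not the authors' implementation: it supposes a hierarchical clustering step has already assigned each of the N fine-level nodes to one of M coarse-level parents, and builds the upsampling operator as a binary membership matrix, so applying it is a pure copy operation and no matrix inversion is involved.

```python
import numpy as np

def membership_upsampler(parent, n_fine):
    """Build an upsampling operator U (n_fine x n_coarse) from a
    hierarchical-clustering assignment: parent[i] is the coarse-level
    cluster that fine-level node i belongs to (hypothetical input)."""
    n_coarse = int(parent.max()) + 1
    U = np.zeros((n_fine, n_coarse))
    U[np.arange(n_fine), parent] = 1.0  # binary membership: no inversion needed
    return U

# Toy example: 6 fine nodes grouped into 3 clusters by some
# (assumed) hierarchical clustering of the fine graph.
parent = np.array([0, 0, 1, 1, 2, 2])
x_coarse = np.array([1.0, -2.0, 0.5])   # signal on the coarse graph
U = membership_upsampler(parent, n_fine=6)
x_fine = U @ x_coarse                   # each node copies its parent's value
print(x_fine)                           # [ 1.   1.  -2.  -2.   0.5  0.5]
```

Stacking several such operators progressively grows the signal from the coarsest level up to the size of the observed signal, which is the decoder behavior the quoted passage refers to.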
“…On the other hand, signal processing on graphs extends concepts and techniques from traditional signal processing to data indexed by generic graphs, see [17]. For example, neural networks and graph signal processing have emerged as important actors in data-science applications dealing with complex datasets, see [18].…”
Section: Introduction
Mentioning; confidence: 99%
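
As a concrete instance of how signal processing extends to data indexed by a graph, the classical time shift is replaced by a graph shift operator S (for example the adjacency or Laplacian matrix), and an FIR filter becomes a polynomial in S. The sketch below is a generic textbook construction with an arbitrary example graph and coefficients, not anything specific to the cited works.

```python
import numpy as np

def graph_filter(S, x, h):
    """Apply the polynomial graph filter y = sum_k h[k] * S^k x,
    the graph analogue of a classical FIR filter."""
    y = np.zeros_like(x)
    Skx = x.copy()           # S^0 x
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx        # advance to S^(k+1) x
    return y

# Toy example: a 4-node path graph and a 3-tap filter.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])  # impulse at node 0
y = graph_filter(A, x, h=[0.5, 0.3, 0.2])
print(y)                            # [0.7 0.3 0.2 0. ]: the impulse diffuses along edges
```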