2019
DOI: 10.1109/tsp.2019.2896229

Graph Topology Inference Based on Sparsifying Transform Learning

Abstract: Graph-based representations play a key role in machine learning. The fundamental step in these representations is the association of a graph structure to a dataset. In this paper, we propose a method that aims at finding a block sparse representation of the graph signal leading to a modular graph whose Laplacian matrix admits the found dictionary as its eigenvectors. The role of sparsity here is to induce a bandlimited representation or, equivalently, a modular structure of the graph. The proposed strategy is …

Cited by 67 publications (37 citation statements). References 34 publications.
“…Some other research works with similar ideas are presented for different stationary and non-stationary processes in [26]–[30]. Dictionary learning [31, 32] and transform learning [33] have also been used for inferring the graph topology. In [31]–[33], a specific relation between the Laplacian matrix and the dictionary atoms has been sought, and hence these algorithms are applicable when we have some knowledge about signal representation in Fig.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Dictionary learning [31, 32] and transform learning [33] have also been used for inferring the graph topology. In [31]–[33], a specific relation between the Laplacian matrix and the dictionary atoms has been sought, and hence these algorithms are applicable when we have some knowledge about signal representation in Fig. 1.…”
[Fig. 1: Illustration of learning a network graph structure from IoT data.]
(Footnote 1: The GSO is a matrix which captures the graph's local topology, and the graph Fourier transform is defined using its eigenvectors.)
Section: Introduction (mentioning)
Confidence: 99%
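The snippet above notes that the GSO captures the graph's local topology and that the graph Fourier transform (GFT) is defined via its eigenvectors. A minimal sketch of this definition, using the combinatorial Laplacian as the GSO on a small path graph of my own choosing (the adjacency matrix and signal values below are illustrative, not data from the cited papers):

```python
import numpy as np

# Illustrative 4-node path graph; not data from the cited papers.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # combinatorial Laplacian, used as the GSO

# Eigenvectors of the symmetric GSO form the graph Fourier basis.
eigvals, U = np.linalg.eigh(L)       # eigvals ascending; columns of U orthonormal

x = np.array([1.0, 2.0, 3.0, 4.0])   # a graph signal: one value per node
x_hat = U.T @ x                      # graph Fourier transform (analysis)
x_rec = U @ x_hat                    # inverse transform (synthesis)

print(np.allclose(x_rec, x))         # → True: U is orthonormal
```

Because the Laplacian of a connected graph has a zero eigenvalue with a constant eigenvector, the first GFT coefficient plays the role of the signal's "DC" component, which is why sparsity in this basis corresponds to a bandlimited representation.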
“…For the directed communication graph with n agents, there are n² − n weighted communication links (variables) [L]_ij, i ≠ j, that need to be inferred. In (7), both matrices LU + UL^⊤ and V are symmetric. Thus, (7) contains at most (n+1)n/2 distinct linear equations that are related to one or some [L]_ij, i ≠ j.…”
Section: Related Prior Work (mentioning)
Confidence: 99%
“…In (7), both matrices LU + UL^⊤ and V are symmetric. Thus, (7) contains at most (n+1)n/2 distinct linear equations that are related to one or some [L]_ij, i ≠ j. Moreover, we note that the constraint conditions L·1_n = 1_n and [L]_ij ≥ 0 in (7) only reduce the number of linear equations but do not affect the number of variables to be inferred.…”
Section: Related Prior Work (mentioning)
Confidence: 99%
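The counting argument in the two snippets above can be checked numerically: a directed Laplacian has n² − n off-diagonal weights to infer, while an n × n symmetric matrix supplies at most (n+1)n/2 distinct scalar equations. The helper functions and the values of n below are my own illustration, not code from the citing paper:

```python
def num_unknowns(n):
    """Off-diagonal weights [L]_ij, i != j, of an n x n directed Laplacian."""
    return n * n - n

def max_distinct_equations(n):
    """Distinct entries of an n x n symmetric matrix: at most (n+1)n/2."""
    return (n + 1) * n // 2

# The system is underdetermined once n^2 - n > (n+1)n/2, i.e. for n > 3.
for n in (3, 4, 10):
    print(n, num_unknowns(n), max_distinct_equations(n))
# → 3 6 6
# → 4 12 10
# → 10 90 55
```

So for more than three agents the symmetric system alone cannot pin down all link weights, which is consistent with the snippet's remark that the extra constraints only reduce the number of equations rather than the number of unknowns.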
“…Based on this model and criterion, we proceed in two phases: we first provide an algorithm for estimating the eigenvectors of the graph matrix, and then find the eigenvalues from the eigenvectors in the second phase [9]. In principle, our model in this paper is similar to the block sparsity models of [16], [17], where all signals are assumed to have the same frequency support and to behave smoothly with respect to the underlying graph structure; however, the crucial difference is that we allow our data to have arbitrary frequency support anywhere within the entire frequency range. We refer to these as wideband graph signals.…”
Section: Introduction (mentioning)
Confidence: 99%