2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2017.7953410

Learning sparse graphs under smoothness prior

Abstract: In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential to carrying out any graph signal processing or machine learning task. To this end, we assume that the data are smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level o…
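As a rough illustration of the parametrization the abstract describes, the sketch below builds a Laplacian L(w) from a 0/1 edge selection vector w over all candidate edges, choosing the K edges whose endpoints are closest in signal value, i.e., the smallest terms of the smoothness cost tr(XᵀL(w)X) = Σ_m w_m ‖x_i − x_j‖². The function name, the unit edge weights, and the simple top-K selection rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def learn_sparse_graph(X, K):
    """Hypothetical helper: select the K candidate edges whose endpoints
    have the smallest squared signal distance z_m = ||x_i - x_j||^2,
    which minimizes the smoothness cost tr(X^T L(w) X) = sum_m w_m z_m
    under the budget ||w||_0 = K."""
    N = X.shape[0]                      # rows of X: one value vector per node
    # all M = N(N-1)/2 candidate (unordered) vertex pairs
    edges = [(i, j) for i in range(N) for j in range(i + 1, N)]
    # per-edge smoothness cost accumulated over the training signals
    z = np.array([np.sum((X[i] - X[j]) ** 2) for i, j in edges])
    # 0/1 edge selection vector: switch on the K cheapest edges
    w = np.zeros(len(edges))
    w[np.argsort(z)[:K]] = 1.0
    # assemble L(w) = sum_m w_m (e_i - e_j)(e_i - e_j)^T
    L = np.zeros((N, N))
    for w_m, (i, j) in zip(w, edges):
        if w_m:
            L[i, i] += 1.0
            L[j, j] += 1.0
            L[i, j] -= 1.0
            L[j, i] -= 1.0
    return w, L
```

Because the smoothness cost is linear in w, a hard sparsity budget makes the selection separable across edges, which is what the top-K rule exploits.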

Cited by 115 publications (103 citation statements) · References 16 publications · Citing years 2018–2025

Citation statements:
“…Importantly, (10) together with (5). The neighborhood-based lasso method in [36] cycles over vertices i = 1, .…”
Section: Graph Selection Via Neighborhood-Based Sparse Linear Regression
confidence: 99%
“…, x_P] ∈ R^{N×P} are assumed to vary smoothly on the sparse graph G, and the actual number of edges |E| is assumed to be considerably smaller than the maximum possible number of edges M := (N choose 2) = N(N − 1)/2. We describe the approach in [5], whose idea is to cast the graph learning problem as one of edge subset selection. As we show in the sequel, it is possible to parametrize the unknown graph topology via a sparse edge selection vector.…”
Section: Graph Learning As An Edge Subset Selection Problem
confidence: 99%
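The quoted passage above treats the topology as an edge subset selection over M = N(N − 1)/2 candidates. The short check below, a hedged sketch on arbitrary random data, numerically verifies the identity that makes this view work: the smoothness cost tr(XᵀL(w)X) decomposes into a weighted sum of per-edge squared differences, so it is linear in the selection vector w.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 6, 4
X = rng.standard_normal((N, P))                 # P arbitrary signals on N nodes
edges = [(i, j) for i in range(N) for j in range(i + 1, N)]
M = len(edges)                                  # M = N(N-1)/2 = 15 candidates
w = rng.integers(0, 2, M).astype(float)         # an arbitrary 0/1 selection

# Laplacian induced by the selection vector w
L = np.zeros((N, N))
for w_m, (i, j) in zip(w, edges):
    L[i, i] += w_m
    L[j, j] += w_m
    L[i, j] -= w_m
    L[j, i] -= w_m

lhs = np.trace(X.T @ L @ X)
rhs = sum(w_m * np.sum((X[i] - X[j]) ** 2) for w_m, (i, j) in zip(w, edges))
assert np.isclose(lhs, rhs)                     # smoothness cost is linear in w
```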
“…In GSP tasks such as filtering or signal recovery from samples, smoothness with respect to the chosen shift matrix is a key assumption. Therefore, smoothness is also used when learning the shift operator from training data, as for example in [13]. Graph autocorrelation is a measure of smoothness that we discuss here, since in the next section, in the context of independent component analysis, we deal with a related concept for multivariate signals: the graph autocorrelation matrix.…”
Section: Graph Autocovariance Of GMA Signals
confidence: 99%
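For the graph autocorrelation mentioned in the quote above, one common formulation (an assumption here; the cited work's exact normalization may differ) is the normalized quadratic form xᵀAx / xᵀx with the adjacency matrix A as the shift operator. A minimal sketch:

```python
import numpy as np

def graph_autocorrelation(x, A):
    """Normalized quadratic form x^T A x / x^T x (assumed definition;
    the cited work's normalization may differ). Larger values indicate
    a signal that varies slowly across the edges of the graph."""
    return (x @ A @ x) / (x @ x)

# Example on a 4-node path graph: a constant signal scores high,
# an alternating (maximally rough) signal scores low.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(graph_autocorrelation(np.ones(4), A))                  # 1.5
print(graph_autocorrelation(np.array([1.0, -1, 1, -1]), A))  # -1.5
```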
“…The derivations of the expected values in the other two equations in (10) are more tedious and a lot lengthier, but otherwise follow the same steps as the derivation of (13). Thus, we skip these derivations here for the sake of brevity, and write down the solution for the system of equations (10).…”
Section: Graph Autocorrelation Of GMA Signals
confidence: 99%