ICASSP 2020 - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9054425
Graph Construction from Data by Non-Negative Kernel Regression

Abstract: Data-driven graph constructions are often used in various applications, including several machine learning tasks, where the goal is to make predictions and discover patterns. However, learning an optimal graph from data is still a challenging task. Weighted K-nearest neighbor and ε-neighborhood methods are among the most common graph construction methods due to their computational simplicity, but the choice of parameters such as K and ε associated with these methods is often ad hoc and lacks a clear interpretation…
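A minimal sketch, assuming Gaussian kernel weights, of the two baseline constructions the abstract names (weighted K-nearest neighbor and ε-neighborhood graphs); this is illustrative code, not the paper's, and the function names, the bandwidth sigma, and the symmetrization choice are assumptions:

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph(X, K=10, sigma=1.0):
    """Weighted K-nearest-neighbor graph with Gaussian kernel weights."""
    D2 = cdist(X, X, "sqeuclidean")
    W = np.zeros_like(D2)
    for i in range(len(X)):
        nbrs = np.argsort(D2[i])[1:K + 1]        # K closest points, self excluded
        W[i, nbrs] = np.exp(-D2[i, nbrs] / (2 * sigma**2))
    return np.maximum(W, W.T)                    # symmetrize the directed KNN graph

def epsilon_graph(X, eps=1.0, sigma=1.0):
    """Connect all pairs within distance eps, weighted by a Gaussian kernel."""
    D2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-D2 / (2 * sigma**2)) * (D2 <= eps**2)
    np.fill_diagonal(W, 0.0)                     # drop self-loops
    return W
```

In both cases the sparsity pattern is fixed by K or ε before any weights are computed, which is the ad hoc parameter choice the abstract points to.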


Cited by 28 publications (43 citation statements). References 24 publications.

Citation statements (ordered by relevance):
“…Note that the proposed methodologies use straightforward techniques to build LGGs; thus, they could be enriched with more principled approaches [41,42]. Another area of interest would be to build upon Reference [15] and see what improvements may arise from the use of graph convolutional networks in domains that are not typically supported by graphs.…”
Section: Discussion (mentioning)
confidence: 99%
“…We present a data summarization approach using dictionary learning (DL) that draws ideas from kMeans and our work on the NNK neighborhood [11]. We propose a two-stage learning scheme where we iteratively solve sparse coding and dictionary update until convergence, or until a predefined number of iterations or a target reconstruction error is reached.…”
Section: Proposed Method: NNK-Means (mentioning)
confidence: 99%
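A minimal sketch of the two-stage scheme described in the excerpt above, assuming a kMeans-style alternation between sparse coding and a least-squares dictionary update; this is not the authors' NNK-Means implementation, and the atom count, sparsity level k, clipping-based non-negativity, and stopping thresholds are all assumptions:

```python
import numpy as np

def two_stage_dictionary_learning(X, n_atoms=50, k=5, n_iter=20, tol=1e-4, seed=0):
    """Alternate (1) sparse coding against the k nearest atoms and
    (2) a least-squares dictionary update, kMeans-style."""
    rng = np.random.default_rng(seed)
    D = X[rng.choice(len(X), n_atoms, replace=False)].copy()  # init atoms from data
    prev_err = np.inf
    for _ in range(n_iter):
        # Stage 1: sparse coding -- each point is coded on its k nearest atoms,
        # with negative coefficients clipped to zero (a crude non-negativity).
        W = np.zeros((len(X), n_atoms))
        d2 = ((X[:, None, :] - D[None, :, :]) ** 2).sum(-1)
        for i in range(len(X)):
            S = np.argsort(d2[i])[:k]
            w, *_ = np.linalg.lstsq(D[S].T, X[i], rcond=None)
            W[i, S] = np.clip(w, 0.0, None)
        # Stage 2: dictionary update -- refit all atoms jointly by least squares.
        D = np.linalg.lstsq(W, X, rcond=None)[0]
        err = np.linalg.norm(X - W @ D)
        if abs(prev_err - err) < tol:            # stop on convergence ...
            break                                # ... or after n_iter passes
        prev_err = err
    return D, W
```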
“…Solving for each W_i in equation (4) involves working with an N × N kernel matrix, leading to run times that scale poorly with the size of the dataset. However, the geometric understanding of the above non-negative kernel regression objective in [11] allows us to efficiently solve for the sparse coefficients (W_i) of each data point by selecting and working with a smaller subset of data points. We see that the kSVD approach is unable to adapt to the nonlinear structure of the data, and adding a kernel is crucial for the kSVD approach to perform well.…”
Section: Proposed Method: NNK-Means (mentioning)
confidence: 99%
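The locality argument in the excerpt can be made concrete with a hypothetical per-point solve that restricts the non-negative kernel regression to a small active set of nearest atoms, so the full N × N kernel matrix is never formed. The Gaussian kernel and the Cholesky-plus-NNLS reduction below are assumptions for illustration, not the authors' code:

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def sparse_coefficients(x, atoms, k=10, sigma=1.0, jitter=1e-8):
    """Non-negative kernel-regression weights of x on its k nearest atoms.
    All other coefficients are implicitly zero, so the per-point cost is
    governed by k rather than by the dataset size N."""
    d2 = ((atoms - x) ** 2).sum(axis=1)
    S = np.argsort(d2)[:k]                       # small active set of atoms
    K_SS = gaussian_kernel(atoms[S], atoms[S], sigma) + jitter * np.eye(k)
    k_Sx = gaussian_kernel(atoms[S], x[None, :], sigma)[:, 0]
    # theta' K_SS theta - 2 k_Sx' theta == ||L' theta - L^{-1} k_Sx||^2 + const,
    # so the non-negative QP reduces to an NNLS solve after a Cholesky factor.
    L = np.linalg.cholesky(K_SS)
    theta, _ = nnls(L.T, np.linalg.solve(L, k_Sx))
    return S, theta                              # indices and non-negative weights
```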