2019 13th International Conference on Sampling Theory and Applications (SampTA)
DOI: 10.1109/sampta45681.2019.9030994
Compressed Diffusion

Abstract: Diffusion maps are a commonly used kernel-based method for manifold learning, which can reveal intrinsic structures in data and embed them in low dimensions. However, as with most kernel methods, its implementation requires a heavy computational load, reaching up to cubic complexity in the number of data points. This limits its usability in modern data analysis. Here, we present a new approach to computing the diffusion geometry, and related embeddings, from a compressed diffusion process between data regions …
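For context on the cubic complexity the abstract refers to: classical diffusion maps build an n × n Gaussian kernel over all data points, row-normalize it into a Markov transition matrix, and eigendecompose that matrix, and the eigendecomposition is the O(n³) step. A minimal sketch of the standard (uncompressed) construction follows; this illustrates the baseline method, not the paper's compressed variant, and the function name and parameter choices are illustrative.

```python
import numpy as np

def diffusion_map(X, epsilon=1.0, n_components=2, t=1):
    # Pairwise squared distances and Gaussian kernel affinities (n x n)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / epsilon)
    # Row-normalize into a Markov (diffusion) transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecomposition dominates the cost: O(n^3) in the number of points
    vals, vecs = np.linalg.eig(P)
    idx = np.argsort(-vals.real)
    vals, vecs = vals.real[idx], vecs.real[:, idx]
    # Drop the trivial constant eigenvector (eigenvalue 1);
    # scale coordinates by eigenvalues raised to the diffusion time t
    return vecs[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)

# Example: embed noisy points on a circle into 2 diffusion coordinates
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)]
X += 0.01 * np.random.default_rng(0).normal(size=(100, 2))
Y = diffusion_map(X, epsilon=0.5)
print(Y.shape)  # (100, 2)
```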

Cited by 7 publications (6 citation statements)
References 6 publications
“…However, since storing the entries of the powered diffusion operator in memory is also an issue, we employ the use of landmarks earlier in the process. It has also been shown that "compressing" the process of diffusion through landmarks in the fashion described here performs better than simply applying Nyström extension (which includes landmark MDS [66]) to diffusion maps [68].…”
Section: Scalability of PHATE
confidence: 99%
“…However, since storing the entries of the powered diffusion operator in memory is also an issue, we employ the use of landmarks earlier in the process. It has also been shown that "compressing" the process of diffusion through landmarks in the fashion described here performs better than simply applying Nyström extension (which includes landmark MDS [82]) to diffusion maps [84].…”
Section: A13 Scalability of PHATE
confidence: 99%
“…On this initial coarse-graining we compute the diffusion potential coordinates by employing landmarking as developed in [19]. Landmarking refers to the idea that instead of computing diffusion probabilities between every pair of points, we can compute diffusion probabilities from points to a well-chosen set of central “landmarks” that maintain the geometry of the data.…”
Section: Results
confidence: 99%
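The landmarking idea in the passage above — computing diffusion probabilities from all points to a small set of landmarks rather than between every pair of points — can be sketched as follows. This is a generic illustration of compressing a diffusion step through landmarks, not the exact operator construction from the cited papers; the random landmark selection and function name are assumptions (k-means centers are a common alternative for choosing landmarks).

```python
import numpy as np

def landmark_diffusion(X, n_landmarks=10, epsilon=1.0, seed=0):
    # Choose a small set of landmark points (random sample here, for illustration)
    rng = np.random.default_rng(seed)
    L = X[rng.choice(len(X), size=n_landmarks, replace=False)]
    # Affinities from every point to every landmark: n x m instead of n x n
    sq = np.sum((X[:, None, :] - L[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / epsilon)
    # Point -> landmark transition probabilities (rows sum to 1)
    P_nm = K / K.sum(axis=1, keepdims=True)
    # Landmark -> point transition probabilities (normalize over points)
    P_mn = (K / K.sum(axis=0, keepdims=True)).T
    # Compressed m x m diffusion step between landmark "regions";
    # powering this small matrix is cheap compared to powering the n x n operator
    P_mm = P_mn @ P_nm
    return P_nm, P_mm

X = np.random.default_rng(1).normal(size=(200, 3))
P_nm, P_mm = landmark_diffusion(X, n_landmarks=10)
print(P_nm.shape, P_mm.shape)  # (200, 10) (10, 10)
```

Because `P_mm` is row-stochastic, it defines a valid Markov chain over the landmarks, so multi-step diffusion can be run at the compressed scale.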
“…We have shown in [13] that this leads to high quality approximations of the diffusion operator which lead to near-identical visualizations with PHATE. In addition, we examined in [19] that this leads to low error approximations of diffusion operators in general. We use this fast approach to compute a low error diffusion potential system for our coarse graining process.…”
Section: Methods
confidence: 99%