2012
DOI: 10.1016/j.acha.2011.08.001
Multi-scale geometric methods for data sets II: Geometric Multi-Resolution Analysis

Abstract: Data sets are often modeled as samples from a probability distribution in R^D, for D large. It is often assumed that the data has some interesting low-dimensional structure, for example that of a d-dimensional manifold M, with d much smaller than D. When M is simply a linear subspace, one may exploit this assumption to encode the data efficiently by projecting onto a dictionary of d vectors in R^D (for example found by SVD), at a cost of (n + D)d for n data points. When M is nonlinear, there are no "explicit"…
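As a rough illustration of the linear case described in the abstract, here is a minimal NumPy sketch (the data and dimensions are synthetic, not from the paper): projecting onto the top d singular vectors stores d coefficients per point plus d basis vectors of length D, i.e. roughly (n + D)d numbers in total.

```python
import numpy as np

# n points in R^D that lie (exactly, for this toy example) on a d-dimensional subspace
n, D, d = 1000, 200, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d)) @ rng.standard_normal((d, D))

# SVD of the centered data gives an orthonormal basis for the best-fit d-dimensional subspace
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
basis = Vt[:d]                      # d dictionary vectors in R^D  -> D*d numbers

coeffs = (X - mean) @ basis.T       # d coefficients per point     -> n*d numbers
X_hat = coeffs @ basis + mean       # decode from ~(n + D)*d stored numbers

print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))   # ~1e-15: exact for this low-rank toy data
```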

Cited by 109 publications (141 citation statements)
References 60 publications
“…Our work is based on Geometric Multi-Resolution Analysis [1], [2], which yields a multiscale dictionary construction and representation for high-dimensional data sets that are nearly low-dimensional. GMRA is extremely efficient, both in terms of use of samples [3] and computationally [2], and easily updatable [2] (thanks to the cover tree [4] algorithm used for the multiscale partitions, and standard algorithms for online updates of randomized principal component analysis).…”
Section: Introduction (mentioning)
confidence: 99%
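To make the multiscale idea concrete, here is a toy sketch in the spirit of the construction quoted above, not the authors' implementation: it recursively partitions the data (a crude 2-means split stands in for the cover tree) and fits a rank-d local PCA on each cell; all function and variable names are illustrative.

```python
import numpy as np

def multiscale_local_pca(X, d, max_depth=3, min_points=20, depth=0, rng=None):
    """Toy multiscale construction: recursively partition the data and fit a rank-d
    affine (local PCA) approximation on each cell. A crude 2-means split stands in
    for GMRA's cover-tree partitions; this is only an illustration."""
    if rng is None:
        rng = np.random.default_rng(0)
    center = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - center, full_matrices=False)
    node = {"depth": depth, "center": center, "basis": Vt[:d], "children": []}
    if depth < max_depth and len(X) >= 2 * min_points:
        seeds = X[rng.choice(len(X), 2, replace=False)]
        for _ in range(10):                                   # a few 2-means iterations
            labels = np.argmin(((X[:, None, :] - seeds[None]) ** 2).sum(-1), axis=1)
            if (labels == 0).all() or (labels == 1).all():
                break
            seeds = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
        for k in (0, 1):
            if (labels == k).sum() >= min_points:
                node["children"].append(multiscale_local_pca(
                    X[labels == k], d, max_depth, min_points, depth + 1, rng))
    return node

# example: a 3-level tree of local rank-2 approximations for noisy circle data in R^20
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 2000)
X = np.zeros((2000, 20))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
X += 0.01 * rng.standard_normal(X.shape)
tree = multiscale_local_pca(X, d=2)
```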
“…According to sparse representation theory, all signals can be sparsely represented, i.e., compressed. The sparse decomposition algorithm and the design of the sparse dictionary are the two main aspects of sparse representation [12]. From the above analysis, the general procedure for image de-noising based on sparse decomposition can be summarized as follows:…”
Section: Sparse Representation Theory (mentioning)
confidence: 99%
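A hedged sketch of the sparse-decomposition step this citation refers to, using Orthogonal Matching Pursuit over a random dictionary; a real image denoiser would use a learned dictionary (e.g., K-SVD) applied to overlapping patches, and all names below are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick at most k dictionary atoms to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# toy setup: random normalized dictionary, a 5-sparse signal, small additive noise
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
y = D @ x_true + 0.01 * rng.standard_normal(64)

x_hat = omp(D, y, k=5)
print(np.linalg.norm(D @ x_hat - D @ x_true))   # small: the sparse code gives a denoised estimate
```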
“…In many learning theory problems, a class of data may form a complex structure embedded in a high-dimensional space R^N [47]–[53]. In the neighborhood of each data point, the structure may be modeled by a local tangent space, or a union of tangent spaces, whose dimensions are much smaller than the dimension of the ambient space R^N [16]. The global shape of the data model can then be obtained from the observed data points by solving Problem 1.…”
Section: Connection To Learning Theory and Data (mentioning)
confidence: 99%
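As an illustration of the local tangent-space model mentioned above (a sketch with synthetic data, not code from any of the cited papers), a d-dimensional tangent space at a point can be estimated by local PCA over its nearest neighbors.

```python
import numpy as np

def local_tangent_space(X, i, k=15, d=2):
    """Estimate a d-dimensional tangent space at X[i] from its k nearest neighbors
    via local PCA (illustrative only)."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(dists)[:k]]
    center = nbrs.mean(axis=0)
    _, _, Vt = np.linalg.svd(nbrs - center, full_matrices=False)
    return center, Vt[:d]            # base point + orthonormal basis of the estimated tangent plane

# example: noisy samples from a 2-dimensional sphere embedded in R^10
rng = np.random.default_rng(1)
S = rng.standard_normal((500, 3))
S /= np.linalg.norm(S, axis=1, keepdims=True)
X = np.hstack([S, np.zeros((500, 7))]) + 0.01 * rng.standard_normal((500, 10))

center, T = local_tangent_space(X, i=0, d=2)
print(T.shape)   # (2, 10): a 2-dimensional tangent plane in the 10-dimensional ambient space
```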
“…For this situation (further developed below), a 4-dimensional subspace is assigned to each moving object in a space H = R^{2F}, where F is the number of frames in the video. Examples where H is infinite dimensional arise in sampling theory and in learning theory [15]–[19]. For example, signals with finite rate of innovation are modeled by a union of subspaces that belongs to an infinite-dimensional space such as L^2(R^d) [2,3,20,21].…”
Section: Introduction (mentioning)
confidence: 99%
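A small sketch of the rank argument behind the statement above, under the standard affine camera assumption (synthetic matrices, not code from the cited works): the 2F x P trajectory matrix of feature points on a single rigid object has rank at most 4, so each object's trajectories span a 4-dimensional subspace of R^{2F}.

```python
import numpy as np

rng = np.random.default_rng(2)
F, P = 30, 50                       # number of frames and of feature points on ONE rigid object
X3d = rng.standard_normal((4, P))   # 3-D points in homogeneous coordinates
X3d[3] = 1.0

# affine camera per frame: each frame contributes 2 image coordinates per point,
# so the full trajectory of one point is a vector in R^(2F)
W = np.vstack([rng.standard_normal((2, 4)) @ X3d for _ in range(F)])   # trajectory matrix, (2F, P)

# the P trajectory columns of a single rigid motion span a subspace of dimension at most 4,
# which is why each moving object is assigned a 4-dimensional subspace of H
print(W.shape, np.linalg.matrix_rank(W))   # (60, 50) 4
```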