2013 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
DOI: 10.1109/globalsip.2013.6737055

Nearly optimal linear embeddings into very low dimensions

Abstract: We propose algorithms for constructing linear embeddings of a finite dataset V ⊂ R^d into a k-dimensional subspace with provable, nearly optimal distortions. First, we propose an exhaustive-search-based algorithm that yields a k-dimensional linear embedding with distortion at most opt(k) + δ for any δ > 0, where opt(k) is the smallest achievable distortion over all possible orthonormal embeddings. This algorithm is space-efficient and can be implemented in a single pass over the data V. However, the runtime…
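To make the objective concrete, here is a sketch of how the distortion of a candidate embedding can be evaluated on the secants of V, assuming the common convention that distortion is the worst-case deviation of squared secant norms from 1 (the paper's exact definition may differ, and the function names here are illustrative only):

```python
# Distortion of a linear map Psi over the normalized secants of V -- a
# minimal sketch under the convention stated above, not the authors' code.
import numpy as np

def secants(V):
    """All normalized pairwise differences (x - y) / ||x - y|| of rows of V."""
    diffs = V[:, None, :] - V[None, :, :]            # (n, n, d) differences
    iu = np.triu_indices(len(V), k=1)                # unordered pairs, x != y
    S = diffs[iu]
    return S / np.linalg.norm(S, axis=1, keepdims=True)

def distortion(Psi, V):
    """Worst-case deviation of squared secant norms from 1 under Psi."""
    S = secants(V)
    return np.max(np.abs(np.sum((S @ Psi.T) ** 2, axis=1) - 1.0))

# Example: a random k x d map with orthonormal rows (k = 4, d = 10).
rng = np.random.default_rng(0)
V = rng.standard_normal((20, 10))
Q, _ = np.linalg.qr(rng.standard_normal((10, 4)))    # orthonormal columns
print(distortion(Q.T, V))
```

An exhaustive search in the spirit of the abstract would evaluate distortion(Psi, V) over a fine net of orthonormal maps Psi and keep the best; the single-pass, space-efficient bookkeeping is the paper's contribution and is not reproduced here.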

Cited by 8 publications (8 citation statements) · References 15 publications
“…However, this seems to be an extremely challenging analytical problem for a generic point set X. For some initial progress in this direction (albeit under somewhat more restrictive settings), see the recent results by [32] and [33]. The main question is to verify the efficiency of the convex relaxation (4), which is essentially an SDP with rank-1 constraints (specified by the secant set S(X)).…”
Section: B. Analysis
Confidence: 99%
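The rank-1 structure referred to here can be made concrete with a short sketch. The following assumes the relaxation takes the common near-isometry-on-secants form: find a PSD matrix P = Ψ^T Ψ whose quadratic form lies within δ of 1 on every secant, with a trace objective standing in for rank. This illustrates the SDP shape only; it is not relaxation (4) of the cited work, and the trace objective and tolerance δ are assumptions.

```python
# SDP with rank-1 data matrices v v^T, one per secant -- an illustrative
# sketch of the structure, not relaxation (4) itself.
import cvxpy as cp

def secant_sdp(S, delta):
    """S: (m, d) array of unit-norm secants; returns a PSD d x d matrix."""
    d = S.shape[1]
    P = cp.Variable((d, d), PSD=True)
    # Row-wise quadratic forms v_i^T P v_i, i.e. <v_i v_i^T, P>, for all i.
    quad = cp.sum(cp.multiply(S @ P, S), axis=1)
    prob = cp.Problem(cp.Minimize(cp.trace(P)),
                      [cp.abs(quad - 1) <= delta])
    prob.solve()
    return P.value
```

Verifying when such a relaxation is tight, i.e. when the optimal P is close to rank k, is exactly the open analytical question the excerpt raises.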
“…Since the appearance of a preliminary version of this manuscript, several works have pursued similar goals of learning norm-preserving linear embeddings using optimization methods. The authors of [32] discuss the specialized problem of learning orthonormal linear embeddings, and develop polynomial-time algorithms with provable approximation guarantees. The authors of [33] propose learning embeddings under a Frobenius norm constraint, and propose a different optimization approach with provable guarantees.…”
Section: E. Metric Learning
Confidence: 99%
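As a rough illustration of the Frobenius-constrained problem class mentioned for [33], a generic projected-gradient sketch follows. This is not the algorithm of [32] or [33]; the step size, budget tau, and iteration count are arbitrary placeholders.

```python
# Projected gradient on the average squared secant distortion, subject to a
# Frobenius-norm budget ||A||_F <= tau -- a generic sketch of the problem
# class, not the cited papers' methods.
import numpy as np

def learn_embedding(S, k, tau, steps=500, lr=0.1, seed=0):
    """S: (m, d) unit secants; returns a k x d map A with ||A||_F <= tau."""
    m, d = S.shape
    A = np.random.default_rng(seed).standard_normal((k, d)) / np.sqrt(d)
    for _ in range(steps):
        r = np.sum((S @ A.T) ** 2, axis=1) - 1.0     # ||A v_i||^2 - 1
        G = (4.0 / m) * (A @ (S.T * r) @ S)          # grad of (1/m) sum r_i^2
        A -= lr * G
        nrm = np.linalg.norm(A)                      # project onto the ball
        if nrm > tau:
            A *= tau / nrm
    return A
```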
“…This framework deterministically produces a near-isometric linear embedding. Other algorithmic approaches for finding near-isometric linear embeddings are also described in [4,6,19].…”
Section: Related Work
Confidence: 99%
“…It is also possible to add a constraint on the Frobenius norm of A directly. Note that the approach in [4] solves a convex relaxation of (6), where the rank constraint is replaced by a nuclear norm constraint. That solution works in the N × N space and requires eigendecompositions.…”
Section: Problem Description
Confidence: 99%
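The eigendecomposition step alluded to here is the standard way to read a k-dimensional map off the relaxed solution. A minimal sketch, assuming the relaxed N × N variable is a Gram matrix P ≈ Ψ^T Ψ:

```python
# Recover a rank-k embedding from a PSD N x N solution via its top
# eigenpairs -- the generic rank-reduction step, sketched.
import numpy as np

def embedding_from_gram(P, k):
    """Top-k eigenpairs of symmetric PSD P give Psi with Psi^T Psi ~ P."""
    w, U = np.linalg.eigh(P)                   # eigenvalues, ascending order
    w, U = w[::-1][:k], U[:, ::-1][:, :k]      # keep the k largest
    return (U * np.sqrt(np.clip(w, 0.0, None))).T   # k x N linear map
```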
“…Hence, the learned metric creates storage as well as computational bottlenecks. As a result, several works [2,3,5,6] consider learning rank-constrained metrics. Unfortunately, enforcing rank constraints on the already computationally challenging MLP (1) makes it non-convex.…”
Section: Introduction
Confidence: 99%
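To make the storage and computation point concrete (a toy illustration with arbitrary sizes): a full d × d metric M costs O(d²) to store and apply, while a rank-k factorization M = L Lᵀ costs O(dk) and turns the Mahalanobis distance into a plain Euclidean distance between embedded points.

```python
# Rank-k Mahalanobis distance without ever forming the d x d matrix M.
import numpy as np

d, k = 10_000, 20                      # arbitrary toy dimensions
rng = np.random.default_rng(0)
L = rng.standard_normal((d, k))        # M = L @ L.T, never materialized
x, y = rng.standard_normal(d), rng.standard_normal(d)

dist = np.linalg.norm(L.T @ (x - y))   # O(dk) instead of O(d^2)
print(dist)
```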