2017
DOI: 10.1093/imaiai/iax011
Universality laws for randomized dimension reduction, with applications

Abstract: Dimension reduction is the process of embedding high-dimensional data into a lower dimensional space to facilitate its analysis. In the Euclidean setting, one fundamental technique for dimension reduction is to apply a random linear map to the data. This dimension reduction procedure succeeds when it preserves certain geometric features of the set. The question is how large the embedding dimension must be to ensure that randomized dimension reduction succeeds with high probability. This paper studies a natural …
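The procedure the abstract describes can be sketched in a few lines: draw a random linear map, apply it to the data, and check how well pairwise geometry is preserved. The sketch below uses an iid Gaussian map for illustration; the point cloud, dimensions, and scaling are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 1000, 200           # points, ambient dimension, embedding dimension

X = rng.normal(size=(n, d))       # synthetic high-dimensional point cloud

# Random linear map: iid Gaussian entries, scaled so that
# E ||S x||^2 = ||x||^2 for every fixed vector x.
S = rng.normal(size=(m, d)) / np.sqrt(m)
Y = X @ S.T                       # embedded points, shape (n, m)

def pairwise_sq_dists(A):
    """Squared Euclidean distances between all rows of A."""
    g = A @ A.T
    sq = np.diag(g)
    return sq[:, None] + sq[None, :] - 2.0 * g

# Relative distortion of squared distances over all distinct pairs
i, j = np.triu_indices(n, k=1)
ratio = pairwise_sq_dists(Y)[i, j] / pairwise_sq_dists(X)[i, j]
distortion = np.max(np.abs(ratio - 1.0))
print(f"max relative distortion of squared distances: {distortion:.3f}")
```

When the embedding dimension m is large enough relative to the geometry of the set (its statistical dimension, in the paper's terminology), the distortion stays small with high probability; shrinking m below that threshold makes the embedding fail.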

Cited by 79 publications (75 citation statements)
References 82 publications
“…The reason why SERSP is beaten by Cocktail on TCGA might be that the random projection approach may not work well when the embedding dimension (p1 or p2) is smaller than the so-called statistical dimension [35]. In the experiments we simply set the embedding dimension to the square root of the dimension of the variables; fine tuning of the parameters p1 or p2 could improve SERSP in this case.…”
Section: Results and Discussion 3.3.1 Performance Comparison Results
confidence: 99%
“…Gaussian random matrices may appear to be a serious restriction. It is known, however, that Gaussian matrices are representative of a large class of random matrix models for which many relevant functionals are universal: they concentrate around the same value for a given matrix size [11,23]. Although it is tempting to justify our model choice by universality, in the case of sparse pseudoinverses we must proceed with care.…”
Section: 2
confidence: 99%
“…Proposition III.2, which shows the validity of the relaxed RSC condition, can be easily extended for design matrices whose rows are not necessarily subgaussian, with a possibly worse sample complexity bound compared to (3). The interested reader is referred to [6], [14], [19] for the details.…”
Section: B Proof of Corollary
confidence: 99%