2014
DOI: 10.1007/978-3-319-11656-3_5

Low-Dimensional Data Representation in Data Analysis

Abstract: Many Data Analysis tasks deal with data which are presented in high-dimensional spaces, and the 'curse of dimensionality' phenomenon is often an obstacle to the use of many methods, including Neural Network methods, for solving these tasks. To avoid this phenomenon, various Representation Learning algorithms are used, as a first key step in solutions of these tasks, to transform the original high-dimensional data into their lower-dimensional representations so that as much information as possible is preserved a…
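To make the idea concrete, here is a minimal sketch of the kind of transformation the abstract describes, using ordinary PCA purely as a stand-in for the representation learning algorithms the paper surveys; the data, dimensions, and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 5))                         # hidden low-dimensional structure (assumed)
mixing = rng.normal(size=(5, 200))
X = latent @ mixing + 0.01 * rng.normal(size=(1000, 200))   # observed 200-dimensional data

pca = PCA(n_components=5)                                   # lower-dimensional representation
Z = pca.fit_transform(X)
print(Z.shape)                                              # (1000, 5)
print(pca.explained_variance_ratio_.sum())                  # close to 1: most variance preserved
```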

Cited by 8 publications (8 citation statements) | References 21 publications | Citing publication years: 2016–2021
“…Asymptotic expansion and uniform large deviation results are obtained for the considered random variable. The problem statement is motivated by manifold learning problems (Roweis and Saul, 2000; Zhang and Zha, 2004; Bernstein and Kuleshov, 2014).…”
Section: Results (mentioning, confidence: 99%)
“…Chosen descriptions of the neighborhoods (local descriptions of the DM) are computed. Examples of such descriptions: Saul (2000); the neighborhood U_N(X_n, k) is used here • Applying Principal Component Analysis (PCA) Jolliffe (2002) to the neighborhood U_N(X_n, ε) results in a p×q orthogonal matrix Q_PCA(X_n) whose columns are the PCA principal eigenvectors corresponding to the q largest PCA eigenvalues (Zhang and Zha, 2004; Bernstein and Kuleshov, 2014). These matrices determine q-dimensional linear spaces L_PCA(X_n) = Span(Q_PCA(X_n)) in ℝ^p which, under certain conditions, accurately approximate the tangent spaces…”
Section: Second Step: Neighborhoods Descriptions (mentioning, confidence: 99%)
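As a rough illustration of the local-PCA step this citation statement describes, the sketch below estimates a tangent-space basis Q_PCA(X_n) from an ε-neighborhood using plain numpy; the neighborhood rule, sample data, and function name are assumptions for the example, not the exact procedure of Zhang and Zha (2004) or Bernstein and Kuleshov (2014).

```python
import numpy as np

def local_pca_tangent_basis(X, n, eps, q):
    """Estimate a q-dimensional tangent basis at sample X[n] by PCA over
    its eps-neighborhood U_N(X_n, eps) (illustrative sketch only)."""
    dists = np.linalg.norm(X - X[n], axis=1)
    U = X[dists <= eps]                        # eps-neighborhood of X[n]
    U_centered = U - U.mean(axis=0)            # center the neighborhood
    _, _, Vt = np.linalg.svd(U_centered, full_matrices=False)
    Q_pca = Vt[:q].T                           # p x q matrix of top-q principal directions
    return Q_pca                               # columns span L_PCA(X_n) = Span(Q_PCA(X_n))

# Usage: points sampled near a 2-dimensional surface embedded in R^3
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, size=(500, 2))
X = np.column_stack([t[:, 0], t[:, 1], np.sin(t[:, 0]) * np.cos(t[:, 1])])
print(local_pca_tangent_basis(X, n=0, eps=0.3, q=2).shape)  # (3, 2)
```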
“…Thus, the curse of dimensionality phenomenon is often an obstacle for using ML techniques. To avoid this phenomenon, various universal dimensionality reduction methods Burges (2010); Sorzano et al. (2014); Bernstein and Kuleshov (2014); Chernova and Burnaev (2015) and/or specific neuroimaging-oriented feature selection methods Mwangi et al. (2014) are used for extracting low-dimensional features from high-dimensional neuroimaging data. After that, ML algorithms are applied to these features.…”
Section: Introduction (mentioning, confidence: 99%)
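The pattern described here (dimensionality reduction as a preprocessing step, followed by an ML algorithm on the extracted features) can be sketched generically as below; scikit-learn's PCA and logistic regression are stand-ins chosen for illustration, not the specific neuroimaging methods cited.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))                 # 200 samples, 5000 raw features (p >> n, assumed)
y = (X[:, :10].sum(axis=1) > 0).astype(int)      # labels driven by a few features (assumed)

model = make_pipeline(
    PCA(n_components=20),                        # low-dimensional feature extraction
    LogisticRegression(max_iter=1000),           # ML algorithm applied to those features
)
model.fit(X, y)
print(model.score(X, y))                         # training accuracy of the pipeline
```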
“…The typical machine learning pipeline consists of the following steps: data collection, model training, and inference [23], [24]. For resource-constrained embedded devices, an important aspect to be considered is appropriate data processing, which allows for a low-dimensional representation of the data [25]. Low-dimensional data allows for the design of less computationally complex algorithms.…”
Mentioning (confidence: 99%)