2009
DOI: 10.1142/s0218001409007752
A Note on the Locally Linear Embedding Algorithm

Abstract: The paper presents mathematical underpinnings of the locally linear embedding technique for data dimensionality reduction. It is shown that a cogent framework for describing the method is that of optimisation on a Grassmann manifold. The solution delivered by the algorithm is characterised as a constrained minimiser for a problem in which the cost function and all the constraints are defined on such a manifold. The role of the internal gauge symmetry in solving the underlying optimisation problem is illuminated.
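To connect the abstract to the algorithm it analyses, here is a minimal NumPy sketch of the standard two-step LLE procedure (local reconstruction weights, then an embedding from the bottom eigenvectors of (I - W)^T (I - W)). The function name lle_embed and its defaults are illustrative; this is the conventional formulation of Roweis and Saul (2000), not the paper's Grassmann-manifold treatment.

```python
import numpy as np

def lle_embed(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Classical two-step LLE (illustrative names and defaults)."""
    n = X.shape[0]

    # Step 0: brute-force k-nearest neighbours.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]

    # Step 1: weights that best reconstruct each point from its neighbours.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                         # neighbours centred at x_i
        G = Z @ Z.T                                   # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularise near-singular G
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                   # rows of W sum to one

    # Step 2: embedding from the bottom eigenvectors of M = (I-W)^T (I-W),
    # skipping the constant eigenvector with eigenvalue ~0.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]
```

The final step minimises tr(Y^T M Y) over embeddings with orthonormal columns; that constrained eigenproblem is the optimisation the paper recasts on a Grassmann manifold.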

Cited by 12 publications (1 citation statement)
References 12 publications
“…In an unsupervised setting we consider routinely used methods available in the Python Scikit-learn package (Pedregosa et al (2011)), namely Principal Component Analysis (PCA), Singular Value Decomposition (SVD), which is a non-centered version of PCA, Locally Linear Embedding (LLE), and Isomap (IMP). The latter two methods are non-linear generalizations of PCA (Roweis and Saul (2000); Tenenbaum, Silva and Langford (2000); see also Chojnacki and Brooks (2009); Bengio et al (2003)), which are widely applied in many contexts such as data visualization (Elgammal and Lee (2004); Tenenbaum, Silva and Langford (2000)) or classification (Vlachos et al (2002)), among others. Considering the dimensions p ∈ {18, 103} of the datasets described below, the dimension-reduction method of Gardes (2018) could not be included in the comparison for the algorithmic complexity reasons described above.…”
Section: Competitors (mentioning)
confidence: 99%
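The quoted passage refers to routine scikit-learn estimators; the sketch below shows one plausible way the four competitors might be instantiated. The toy dataset, neighbour counts, and target dimension are illustrative assumptions, not values from the cited study.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# Toy manifold data standing in for the study's datasets.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

reducers = {
    "PCA": PCA(n_components=2),
    "SVD": TruncatedSVD(n_components=2),  # non-centered variant of PCA
    "LLE": LocallyLinearEmbedding(n_neighbors=10, n_components=2),
    "IMP": Isomap(n_neighbors=10, n_components=2),
}

for name, reducer in reducers.items():
    Y = reducer.fit_transform(X)  # each method maps the data to 2 dimensions
    print(name, Y.shape)
```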