2020 IEEE International Conference on Big Data (Big Data) 2020
DOI: 10.1109/bigdata50022.2020.9378049

Extendable and invertible manifold learning with geometry regularized autoencoders

Cited by 16 publications (9 citation statements)
References 35 publications
“…Many machine learning methods depend on some measure of pairwise similarity (which is usually unsupervised) including dimensionality reduction methods [17], [18], [19], [20], [21], [22], [23], spectral clustering [24], and any method involving the kernel trick such as SVM [25] and kernel PCA [26]. Random forest proximities can be used to extend many of these problems to a supervised setting and have been used for data visualization [27], [28], [29], [30], [31], outlier detection [30], [32], [33], [34], and data imputation [35], [36], [37], [38].…”
Section: Introduction (mentioning)
confidence: 99%
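The statement above describes using random forest proximities as a supervised pairwise similarity in place of an unsupervised kernel. As a minimal sketch (assuming scikit-learn; the proximity here is the Breiman-style fraction of trees in which two samples share a leaf, which may differ from the exact variant used in the cited works):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Breiman-style random forest proximity: the fraction of trees in which
# two samples land in the same leaf.
X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

leaves = rf.apply(X)                                   # (n_samples, n_trees) leaf indices
same_leaf = leaves[:, None, :] == leaves[None, :, :]   # pairwise leaf agreement per tree
proximity = same_leaf.mean(axis=2)                     # (n_samples, n_samples), values in [0, 1]

# The proximity matrix can then stand in for an unsupervised affinity,
# e.g. as input to spectral clustering or an MDS-style visualization.
```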
“…Some parametric variants of the above methods have been proposed to improve generalizability and explainability. Topological autoencoders (TAE) [36] and Geometry Regularized AutoEncoders (GRAE) [11], for example, train autoencoders directly with local distance constraints in the input space. Ivis [49] proposes a triplet loss function with distance as a constraint to train the neural network.…”
Section: Dimension Reduction and Data Visualization (mentioning)
confidence: 99%
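The geometry-constrained autoencoders mentioned above can be sketched as a reconstruction loss plus a geometric penalty on the bottleneck. A minimal PyTorch sketch of that idea (not the TAE or GRAE reference implementations; the target `e_target` is assumed to be a precomputed low-dimensional embedding aligned row-wise with the input batch):

```python
import torch
import torch.nn as nn

class GeometryRegularizedAE(nn.Module):
    """Autoencoder whose latent codes are pulled toward a precomputed embedding."""
    def __init__(self, dim_in, dim_latent=2, lam=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(),
                                     nn.Linear(64, dim_latent))
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 64), nn.ReLU(),
                                     nn.Linear(64, dim_in))
        self.lam = lam

    def loss(self, x, e_target):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        recon = ((x - x_hat) ** 2).mean()     # usual reconstruction term
        geom = ((z - e_target) ** 2).mean()   # geometric regularization toward e_target
        return recon + self.lam * geom
```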
“…Ding et al. (2018; scvis) and Graving and Couzin (2020; VAE-SNE) describe VAE-derived dimensionality reduction algorithms based on the ELBO objective. Duque, Morin, Wolf, and Moon (2020; geometry-regularized autoencoders) regularize an autoencoder with the PHATE (potential of heat-diffusion for affinity-based trajectory embedding) embedding algorithm (Moon et al., 2019). Szubert et al. (2019; ivis) and Robinson (2020; differential embedding networks) make use of Siamese neural network architectures with structure-preserving loss functions to learn embeddings.…”
Section: Related Work (mentioning)
confidence: 99%
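The Siamese and triplet formulations mentioned for ivis and differential embedding networks can be illustrated with a standard triplet margin loss. This is only a schematic, assuming PyTorch; the positive and negative samples below are stand-ins, whereas ivis actually draws positives from nearest-neighbour sets rather than by adding noise:

```python
import torch
import torch.nn as nn

# Structure-preserving triplet objective: an anchor should embed closer to a
# near neighbour than to a randomly drawn far point.
encoder = nn.Sequential(nn.Linear(50, 32), nn.ReLU(), nn.Linear(32, 2))
triplet = nn.TripletMarginLoss(margin=1.0)

x_anchor = torch.randn(128, 50)                        # anchor points
x_pos = x_anchor + 0.05 * torch.randn_like(x_anchor)   # stand-in for kNN neighbours
x_neg = torch.randn(128, 50)                           # stand-in for distant points

loss = triplet(encoder(x_anchor), encoder(x_pos), encoder(x_neg))
loss.backward()
```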
“…In principle, any embedding technique can be implemented parametrically by training a parametric model (e.g., a neural network) to predict embeddings from the original high-dimensional data (as in Duque et al., 2020). However, such a parametric embedding is limited in comparison to directly optimizing the algorithm's loss function.…”
Section: Comparisons With Indirect Parametric Embeddings (mentioning)
confidence: 99%
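The "indirect" parametric embedding described in this statement amounts to regressing precomputed embedding coordinates from the original features. A small sketch, assuming scikit-learn and using PCA purely as a placeholder for whatever embedding algorithm is being made parametric:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA            # placeholder for PHATE / UMAP / t-SNE output
from sklearn.neural_network import MLPRegressor

# Precompute an embedding with any non-parametric method, then fit a network
# that maps the original features onto those coordinates so the embedding
# extends to unseen points.
X, _ = load_digits(return_X_y=True)
E = PCA(n_components=2).fit_transform(X)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, E)

new_points_embedded = net.predict(X[:5])         # embed (here, re-embed) arbitrary points
```

As the quoted passage notes, this only mimics the precomputed embedding rather than optimizing the original algorithm's loss, so the learned map is limited by how well the regression fits.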