2020
DOI: 10.1016/j.gmod.2020.101060

Robust dimensionality reduction for data visualization with deep neural networks

Cited by 28 publications (16 citation statements) · References 22 publications
“…When developing machine learning models (such as ANNs), adding more features or dimensions can, beyond a certain point, decrease a model's accuracy (a phenomenon called ‘the curse of dimensionality’) ( Han et al, 2020 ). In this case, a dimensionality reduction strategy (such as PCA) may be employed to accelerate model training, reduce model complexity, and avoid overfitting ( Becker et al, 2020 ). In general, PCA projects the original data onto a small set of new variables called principal components (PCs).…”
Section: Methods
confidence: 99%
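As a concrete illustration of the PCA preprocessing this statement describes, a minimal sketch with scikit-learn follows. It is not taken from the cited papers; the synthetic data, the scaler, and the choice of 5 components are arbitrary assumptions for the example.

```python
# Illustrative sketch: PCA as a dimensionality reduction step before model
# training. Synthetic data; 5 retained components is an arbitrary choice.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))             # 500 samples, 50 raw features

X_std = StandardScaler().fit_transform(X)  # PCA expects centered (often scaled) data
pca = PCA(n_components=5)                  # keep the first 5 principal components
X_low = pca.fit_transform(X_std)           # project onto the PCs

print(X_low.shape)                         # (500, 5)
print(pca.explained_variance_ratio_.sum()) # fraction of variance retained
```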
“…An evident example of this approach is the use of kernel functions in SVM classifiers, which aim to improve separability so that the separating hyperplane can be located efficiently [14]. Another example is the latent representation formed in the hidden layers of ANNs [15,16]. The goal is to increase the compactness of the class samples by abstracting the input data away from its specifics and details.…”
Section: Introduction
confidence: 99%
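A minimal sketch of the kernel idea mentioned above (synthetic data, not from reference [14]): an RBF kernel implicitly maps the inputs to a space where the two classes become linearly separable, so the hyperplane is easy to locate, while a linear kernel cannot separate them in the original space.

```python
# Concentric-circles data: not linearly separable in the input space, but
# separable after the implicit mapping induced by the RBF kernel.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)  # no good separating hyperplane here
rbf_svm = SVC(kernel="rbf").fit(X, y)        # hyperplane found in the kernel-induced space

print("linear accuracy:", linear_svm.score(X, y))
print("rbf accuracy:   ", rbf_svm.score(X, y))
```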
“…The ReNDA algorithm [15] is a recent neural-based approach that uses two networks, improving on earlier work by the same authors. One network implements a nonlinear generalization of Fisher's Linear Discriminant Analysis, using a method called GerDA; the other network is an autoencoder used as a regularizer.…”
Section: 2
confidence: 99%
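The two-network structure described above can be sketched roughly as follows. This is a hypothetical PyTorch illustration, not the authors' GerDA/ReNDA implementation: the discriminant branch uses a simple within-/between-class scatter ratio as a stand-in for the nonlinear Fisher criterion, and the decoder's reconstruction loss plays the regularizer role; all names and weights are assumptions.

```python
# Hypothetical sketch of a two-network dimensionality reducer: an encoder
# trained with a Fisher-style discriminant loss, plus an autoencoder branch
# whose reconstruction error acts as a regularizer. Not the ReNDA code.
import torch
import torch.nn as nn

class TwoNetworkDR(nn.Module):
    def __init__(self, d_in=50, d_low=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 32), nn.Tanh(), nn.Linear(32, d_low))
        self.decoder = nn.Sequential(nn.Linear(d_low, 32), nn.Tanh(), nn.Linear(32, d_in))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def fisher_style_loss(z, y, eps=1e-8):
    """Within-class scatter over between-class scatter on the batch embedding."""
    mu = z.mean(dim=0)
    within, between = 0.0, 0.0
    for c in y.unique():
        zc = z[y == c]
        mu_c = zc.mean(dim=0)
        within = within + ((zc - mu_c) ** 2).sum(dim=1).mean()
        between = between + ((mu_c - mu) ** 2).sum()
    return within / (between + eps)

model = TwoNetworkDR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 50)                 # toy batch: 128 samples, 50 features
y = torch.randint(0, 3, (128,))          # toy labels, 3 classes

for _ in range(100):
    z, x_hat = model(x)
    # discriminant loss + reconstruction penalty (0.1 is an assumed weight)
    loss = fisher_style_loss(z, y) + 0.1 * nn.functional.mse_loss(x_hat, x)
    opt.zero_grad(); loss.backward(); opt.step()
```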
“…Typically, autoencoders produce results comparable to PCA. The ReNDA algorithm [15] uses two networks, improving on…”
Section: 2
confidence: 99%
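One way to read the "comparable to PCA" remark: a linear autoencoder trained with squared error converges toward the same subspace PCA spans, so the two reconstruction errors end up close. A small sketch on synthetic data (my illustration, not from the cited work):

```python
# A linear autoencoder with MSE loss recovers the PCA subspace, so its
# reconstruction error approaches PCA's. Synthetic, centered data.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)).astype("float32")
X -= X.mean(axis=0)
xt = torch.from_numpy(X)

enc = nn.Linear(20, 2, bias=False)       # encode to 2 dimensions
dec = nn.Linear(2, 20, bias=False)       # decode back to 20
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)

for _ in range(2000):
    loss = nn.functional.mse_loss(dec(enc(xt)), xt)
    opt.zero_grad(); loss.backward(); opt.step()

pca = PCA(n_components=2).fit(X)
pca_err = ((pca.inverse_transform(pca.transform(X)) - X) ** 2).mean()
print(f"PCA MSE: {pca_err:.4f}  linear AE MSE: {loss.item():.4f}")
```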