2018
DOI: 10.32614/rj-2018-039
dimRed and coRanking - Unifying Dimensionality Reduction in R

Abstract: "Dimensionality reduction" (DR) is a widely used approach to find low-dimensional and interpretable representations of data that are natively embedded in high-dimensional spaces. DR can be realized by a plethora of methods with different properties, objectives, and, hence, (dis)advantages. The resulting low-dimensional data embeddings are often difficult to compare with objective criteria. Here, we introduce the dimRed and coRanking packages for the R language. These open source software packages enable users t…
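The abstract's central claim is a unified interface for many DR methods plus objective comparison criteria. As a minimal sketch (assuming the CRAN packages dimRed and coRanking are installed; the data set name, the knn value, and the Q_local criterion are illustrative choices, not taken from the paper), the workflow looks like this:

    ## Minimal sketch of the unified dimRed workflow (illustrative parameters).
    library(dimRed)

    ## One of the example manifolds shipped with dimRed: a 2-D S-shaped
    ## surface embedded in 3-D space.
    dat <- loadDataSet("3D S Curve", n = 1000)

    ## Two different DR methods through the same embed() interface.
    emb_pca <- embed(dat, "PCA")
    emb_iso <- embed(dat, "Isomap", knn = 15)

    ## Objective comparison of the embeddings; Q_local is one of several
    ## co-ranking based criteria offered by quality().
    quality(emb_pca, "Q_local")
    quality(emb_iso, "Q_local")

The quality criteria and the R_NX curves that appear in the citation statements below are computed from the co-ranking matrix provided by the coRanking package, which is what ties the two packages together.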

Cited by 53 publications (54 citation statements)
References 30 publications
“…Unsupervised / Supervised / Software:
Autoencoder: Keras (Chollet, 2015); R: dimRed (Kraemer et al., 2018), h2o (Candel et al., 2015), RcppDL (Kou and Sugomori, 2014)
Convolutional Deep Belief Network (CDBN): R & Python: TensorFlow (Abadi et al., 2016), Keras (Chollet, 2015), h2o (Candel et al., 2015)
Convolutional Neural Network (CNN): R & Python: Keras (Chollet, 2015), MXNet (Chen et al., 2015), TensorFlow (Abadi et al., 2016), h2o (Candel et al., 2015); fastai (Python) (Howard and Gugger, 2018)
Deep Belief Network (DBN): RcppDL (R) (Kou and Sugomori, 2014); Python: Caffe (Jia et al., 2014), Theano (Theano Development Team, 2016), PyTorch (Paszke et al., 2017); R & Python: TensorFlow (Abadi et al., 2016), h2o (Candel et al., 2015)
Deep Boltzmann Machine (DBM): Python: boltzmann-machines (Bondarenko, 2017), pydbm (Chimera, 2019)…”
Section: Model (mentioning; confidence: 99%)
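The table above points to dimRed as one R route to autoencoder-based DR. A hedged sketch, assuming the "AutoEncoder" method listed in dimRedMethodList() in recent dimRed releases and a working TensorFlow/Keras backend (the data set and ndim are illustrative):

    ## Hedged sketch: autoencoder embedding via dimRed's unified interface.
    ## Requires a TensorFlow/Keras installation reachable from R.
    library(dimRed)

    dat <- loadDataSet("Swiss Roll", n = 1000)

    ## Neural-network based, nonlinear DR to two dimensions.
    emb_ae <- embed(dat, "AutoEncoder", ndim = 2)

    ## Embedded coordinates of the result object.
    head(getData(getDimRedData(emb_ae)))

The resulting object can be passed to quality() like any other dimRed embedding, so autoencoder results are compared with the same criteria as the classical methods.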
“…We evaluated these techniques with the HemoPI-1 model dataset using R_NX(K) quality curves [50] (Fig. S2), computed with the dimRed R package [51]. Among the different techniques, we selected t-distributed stochastic neighbour embedding (t-SNE) to display the HemoPI-1, AMP and HAMP datasets.…”
(mentioning; confidence: 99%)
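The R_NX(K) comparison described in this statement can be reproduced in outline with dimRed's plot_R_NX() helper, which builds on the coRanking package. A minimal sketch, assuming the suggested back ends (e.g. Rtsne for t-SNE) are installed; the data set, knn, and perplexity values are illustrative, not those of the cited study:

    ## Sketch: one R_NX(K) curve per embedding method.
    library(dimRed)

    dat <- loadDataSet("3D S Curve", n = 500)

    embeddings <- list(
      PCA    = embed(dat, "PCA"),
      Isomap = embed(dat, "Isomap", knn = 10),
      tSNE   = embed(dat, "tSNE", perplexity = 30)
    )

    ## Higher curves indicate better neighbourhood preservation at
    ## neighbourhood size K.
    plot_R_NX(embeddings)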
“…In particular: nonlinear methods typically require tuning of specific parameters, objective criteria are often lacking, a proper weighting of observations is difficult, and it is harder to interpret the resulting indicators due to their nonlinear nature (Kraemer et al., 2018). The salient feature of PCA is that an inverse projection is well defined and allows for a deeper inspection of the errors, which is not the case for nonlinear methods due to the pre-image problem (Mika et al., 1999; Arenas-Garcia et al., 2013).…”
(mentioning; confidence: 99%)
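To make the contrast concrete: PCA's inverse projection is just a matrix product, so per-observation reconstruction errors are directly available, whereas nonlinear kernel methods would require solving a pre-image problem first. An illustrative sketch in base R (variable names and the choice of k are not from the cited work):

    ## PCA forward and inverse projection with per-observation errors.
    X <- as.matrix(iris[, 1:4])

    pca <- prcomp(X, center = TRUE, scale. = FALSE)
    k   <- 2  # number of retained components

    ## Project onto the first k principal components, then map back.
    scores <- pca$x[, 1:k, drop = FALSE]
    X_hat  <- scores %*% t(pca$rotation[, 1:k, drop = FALSE])
    X_hat  <- sweep(X_hat, 2, pca$center, "+")

    ## Squared reconstruction error per observation: easy to compute and
    ## interpret for PCA, generally unavailable in closed form for
    ## nonlinear methods.
    recon_error <- rowSums((X - X_hat)^2)
    summary(recon_error)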