2020
DOI: 10.1177/1473871620909485

Deep learning multidimensional projections

Abstract: Dimensionality reduction methods, also known as projections, are often used to explore multidimensional data in machine learning, data science, and information visualization. However, several such methods, such as the well-known t-distributed stochastic neighbor embedding and its variants, are computationally expensive for large datasets, suffer from stability problems, and cannot directly handle out-of-sample data. We propose a learning approach to construct any such projections. We train a deep neural network…
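To make the approach concrete, here is a minimal sketch of the idea in the abstract, assuming scikit-learn's t-SNE to produce the training projection and a small Keras regressor; the layer sizes and training settings are illustrative guesses, not the authors' exact configuration.

```python
# Minimal sketch: learn a projection P with a deep network (assumed setup,
# not the authors' exact architecture).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

X = load_digits().data.astype("float32")    # n_samples x n_dims
Y = TSNE(n_components=2).fit_transform(X)   # "ground truth" 2-D projection
Y = MinMaxScaler().fit_transform(Y)         # scale targets to [0, 1]

# Train an MLP to imitate the projection: f(x) ~ P(x)
model = keras.Sequential([
    keras.layers.Dense(256, activation="relu", input_shape=(X.shape[1],)),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(2, activation="sigmoid"),   # 2-D output in [0, 1]^2
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=50, batch_size=64, verbose=0)

# Out-of-sample data is projected with a single forward pass,
# with no need to re-run t-SNE on the full dataset.
Y_new = model.predict(X[:10])
```

Once trained, the network projects unseen samples with one forward pass per sample, which is what gives this style of method its speed, stability, and out-of-sample advantages over re-running t-SNE.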


Cited by 60 publications (116 citation statements). References 33 publications.
“…This limitation is well known and discussed in several works [21, 47–49]. The same limitations are shared by the inverse projection P⁻¹ [12, 20, 50].…”
Section: Dense Map Filtering
Mentioning confidence: 88%
“…To compute decision maps, Reference [9] used t-distributed Stochastic Neighbor Embedding (t-SNE) [18] and Local Affine Multidimensional Projections (LAMP) [19] to implement P, and Inverse LAMP (iLAMP) [20] to implement P⁻¹. More recently, Espadoto et al. proposed a more accurate and faster-to-compute implementation of P⁻¹ based on deep learning [12] (NNinv). It is worth noting that both NNinv and iLAMP fit P⁻¹ from data.…”
Section: Decision Boundary Maps
Mentioning confidence: 99%
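As a rough illustration of how these building blocks combine into a decision boundary map, the sketch below assumes a trained n-dimensional classifier `clf` and a learned NNinv-style inverse-projection network `inv_model` (both hypothetical names); each pixel of a 2-D grid is colored by the class predicted for its back-projected point.

```python
# Hedged sketch of a decision boundary map via a learned inverse projection.
# `inv_model` (2-D -> n-D network) and `clf` (n-D classifier) are assumed
# to be trained already; their names are illustrative, not from the paper.
import numpy as np

def decision_map(inv_model, clf, grid_size=256):
    # Regular pixel grid over the 2-D projection space [0, 1]^2
    xs = np.linspace(0.0, 1.0, grid_size)
    xx, yy = np.meshgrid(xs, xs)
    pts_2d = np.stack([xx.ravel(), yy.ravel()], axis=1)
    # Apply P^-1: lift every pixel back to data space, then classify it
    pts_nd = inv_model.predict(pts_2d)
    labels = clf.predict(pts_nd)
    return labels.reshape(grid_size, grid_size)  # one class label per pixel
```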