2017
DOI: 10.1177/1473871617713337
Projections as visual aids for classification system design

Abstract: Dimensionality reduction is a compelling alternative for high-dimensional data visualization. This method provides insight into high-dimensional feature spaces by mapping relationships between observations (high-dimensional vectors) to low (two or three) dimensional spaces. These low-dimensional representations support tasks such as outlier and group detection based on direct visualization. Supervised learning, a subfield of machine learning, is also concerned with observations. A key task in supervised learni…
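
As a concrete illustration of the workflow the abstract describes, here is a minimal sketch (Python with scikit-learn; the digits dataset and all parameter choices are assumptions for illustration, not the paper's experimental setup) that projects labeled high-dimensional observations to 2D with t-SNE and colors them by class:

```python
# Minimal sketch: project a labeled, high-dimensional dataset to 2D with t-SNE
# and color points by class to visually judge how separable the classes are.
# Dataset and parameters are illustrative assumptions, not the paper's setup.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)          # 1797 observations, 64 features
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, cmap="tab10", s=8)
plt.colorbar(label="class label")
plt.title("t-SNE projection of the digits dataset")
plt.show()
```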

Cited by 62 publications (63 citation statements)
References 51 publications
“…Simply put, we do not know what happens in the blank space between the scatterplot points. In particular, the decision boundaries ∂DZ(c) of the classifier are not explicitly visualized, leaving the user to guess their actual position [4].…”
Section: Decision Boundary Maps (mentioning)
confidence: 99%
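
Decision boundary maps address exactly this blank-space problem by, in broad strokes, densely sampling the 2D projection space, mapping each sample back to the feature space, and coloring it with the classifier's prediction. The sketch below outlines that recipe; `inverse_project` is a hypothetical placeholder for whatever inverse mapping is used in practice (e.g. a trained decoder), not a real library call:

```python
# Sketch of a decision boundary map: label every pixel of the 2D projection
# space by inverse-projecting it to the feature space and asking the classifier.
# `inverse_project` is a hypothetical placeholder for a learned inverse mapping;
# it is not part of any specific library API.
import numpy as np
import matplotlib.pyplot as plt

def decision_boundary_map(classifier, inverse_project, xlim, ylim, resolution=200):
    xs = np.linspace(*xlim, resolution)
    ys = np.linspace(*ylim, resolution)
    grid_2d = np.array([[x, y] for y in ys for x in xs])   # dense 2D grid
    grid_nd = inverse_project(grid_2d)                     # back to feature space
    labels = classifier.predict(grid_nd)                   # classifier's verdict per pixel
    return labels.reshape(resolution, resolution)

# Hypothetical usage, assuming a fitted `clf` and an inverse mapping `inv_proj`:
# img = decision_boundary_map(clf, inv_proj, (-50, 50), (-50, 50))
# plt.imshow(img, origin="lower", extent=(-50, 50, -50, 50), cmap="tab10")
# plt.show()
```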
“…As such methods can become very complex, as in the case of deep learning (DL) methods, practitioners and users have challenges in understanding, customizing and trusting them [1,2]. To alleviate this, recent work has focused on visually explaining how ML techniques learn and take their decisions [3][4][5][6].…”
Section: Introduction (mentioning)
confidence: 99%
“…To do this, one way is to engineer discriminating features using the 17 raw ones present in the data. Projections can help us determine how good the engineered features are: if we find a projection where same-class points are well separated into clusters, then the features that the projection has used as input are a good start for building a good classifier [79]. Figure 6 shows our method applied to this dataset, with the right image showing the t-SNE projection.…”
Section: Segmentation Dataset (mentioning)
confidence: 99%
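
The check described in this snippet, judging engineered features by how well a projection separates the classes, can be sketched as follows. The wine dataset and the "engineered" variant (simple standardization) are illustrative assumptions, not the segmentation dataset or features of the cited work:

```python
# Sketch of the feature-engineering check: project the raw features and an
# engineered feature set with t-SNE and compare, by eye, how well same-class
# points cluster. Dataset and feature variants are illustrative assumptions.
import matplotlib.pyplot as plt
from sklearn.datasets import load_wine
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
feature_sets = {
    "raw features": X,
    "standardized features": StandardScaler().fit_transform(X),  # simple "engineered" variant
}

fig, axes = plt.subplots(1, len(feature_sets), figsize=(10, 4))
for ax, (name, feats) in zip(axes, feature_sets.items()):
    p = TSNE(n_components=2, random_state=0).fit_transform(feats)
    ax.scatter(p[:, 0], p[:, 1], c=y, cmap="tab10", s=10)
    ax.set_title(name)
plt.show()
```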
“…Finally, plotting the actual misclassified points atop such a projection allows us to reason about which data attributes these have and how these may have caused problems for the classifier. These, and other, scenarios have been examined in recent literature [114][115][116]. In particular, Rauber et al. [115] show that projections can be used as good predictors for the ease of constructing a good classifier from a given training set.…”
Section: Dimensionality Reduction (mentioning)
confidence: 99%
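
A minimal sketch of the first scenario, overlaying misclassified points on a projection, might look like the following; the classifier and dataset choices are assumptions for illustration only:

```python
# Sketch of overlaying misclassified points on a projection: train a simple
# classifier, project the test set, and mark the errors. Classifier and dataset
# choices are illustrative, not those of the cited studies.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier().fit(X_tr, y_tr)
wrong = clf.predict(X_te) != y_te                     # boolean mask of errors

P = TSNE(n_components=2, random_state=0).fit_transform(X_te)
plt.scatter(P[:, 0], P[:, 1], c=y_te, cmap="tab10", s=10, alpha=0.5)
plt.scatter(P[wrong, 0], P[wrong, 1], facecolors="none",
            edgecolors="black", s=60, label="misclassified")
plt.legend()
plt.show()
```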
“…In particular, Rauber et al. [115] show that projections can be used as good predictors for the ease of constructing a good classifier from a given training set. The idea is further developed in [13], who show how projections can be used to improve an existing classifier by semi-supervised training.…”
Section: Dimensionality Reduction (mentioning)
confidence: 99%
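
One common way to quantify how "easy" a projected training set looks is the neighborhood hit: the average fraction of each point's nearest projected neighbors that share its class label. The cited works may use a different formulation, so the sketch below is only an assumed, illustrative proxy:

```python
# Sketch of a neighborhood-hit score as a proxy for "ease of classification" in
# a 2D projection. This is an assumed, illustrative formulation; the measure
# used in the cited works may differ.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_hit(points_2d, labels, k=6):
    # k + 1 neighbors because each point is its own nearest neighbor
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points_2d)
    _, idx = nn.kneighbors(points_2d)
    same_class = labels[idx[:, 1:]] == labels[:, None]   # drop self, compare labels
    return same_class.mean()

# Example: two well-separated synthetic clusters score close to 1.0.
rng = np.random.default_rng(0)
P = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
L = np.array([0] * 50 + [1] * 50)
print(f"neighborhood hit: {neighborhood_hit(P, L):.2f}")
```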