2017
DOI: 10.1109/tvcg.2016.2598838
Visualizing the Hidden Activity of Artificial Neural Networks

Abstract: In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between lear…

Cited by 279 publications (252 citation statements)
References 35 publications
“…Lighter areas tend to be less typical of their allocated class. Rauber et al. [RFFT17] use dimensionality-reduction techniques to project data instances and neurons in multilayer perceptrons and convolutional neural networks, presenting both the classification results and the relationships between artificial neurons (Figure c). Dendrogramix [BDB15] interactively visualizes clustering results and data patterns from agglomerative hierarchical clustering (AHC) by combining a dendrogram and a similarity matrix.…”
Section: PVA Pipeline
confidence: 99%
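The snippet above describes projecting high-dimensional data instances (or neuron activations) to 2D with dimensionality reduction. A minimal sketch of that idea, using PCA via SVD on hypothetical activation data (the dataset, dimensions, and cluster layout below are illustrative, not from the cited paper):

```python
import numpy as np

# Hypothetical hidden-layer activations: 100 instances x 64 units,
# drawn as two loose Gaussian clusters standing in for two classes.
rng = np.random.default_rng(0)
acts = np.vstack([
    rng.normal(0.0, 1.0, size=(50, 64)),
    rng.normal(3.0, 1.0, size=(50, 64)),
])

def pca_2d(X):
    """Project the rows of X onto the first two principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

proj = pca_2d(acts)
print(proj.shape)  # each instance is now a 2D point, ready to scatter-plot
```

In a real pipeline one would typically substitute t-SNE or a similar nonlinear method for PCA, and color each projected point by its class label to inspect classification structure.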
“…(b) Slingsby et al. [SDW11] present interactive graphics with color-coded maps and parallel coordinates to explore uncertainty in area classification results from the Output Area Classification (OAC). (c) Rauber et al. [RFFT17] use projections to visualize the similarities between artificial neurons and to reveal the inter-layer evolution of hidden layers after training.…”
Section: PVA Pipeline
confidence: 99%
“…This system aims at helping during the training process, whereas ActiVis focuses its analysis on a post-training step. Interestingly, Rauber et al. [RFFT17] conducted several experiments on different datasets to demonstrate the usefulness of projections for evaluating how well the models learned to split the data. All these systems use 2D projections in conjunction with ground truth, that is, labeled data, whereas our approach is meant to use those projections solely in the absence of ground truth.…”
Section: Related Work
confidence: 99%
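The snippet above contrasts judging a projection with ground-truth labels against judging it with predicted labels only. One illustrative way to quantify "how well the model learned to split the data" from a projection is a simple between-class vs. within-class spread ratio; the data, labels, and `separation_ratio` helper below are hypothetical constructs for illustration, not any cited system's metric:

```python
import numpy as np

# Hypothetical 2D projection of instances plus predicted class labels
# (no ground truth needed: the labels come from the model itself).
rng = np.random.default_rng(1)
proj = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
pred = np.array([0] * 40 + [1] * 40)

def separation_ratio(points, labels):
    """Distance between class centroids divided by mean within-class spread.
    Larger values suggest the projection splits the predicted classes well."""
    classes = np.unique(labels)
    centroids = np.array([points[labels == c].mean(axis=0) for c in classes])
    between = np.linalg.norm(centroids[0] - centroids[1])
    within = np.mean([
        np.linalg.norm(points[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    return between / within

score = separation_ratio(proj, pred)
print(round(score, 2))  # well above 1 for these two well-separated blobs
```

Visual inspection of the scatter plot remains the primary tool in the cited systems; a scalar like this only summarizes what the analyst would see.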
“…Understanding how such networks learn from examples is notoriously hard, as they usually operate as black boxes [MPG*14]. Rauber et al. [RFFT17] use bundling to help with this: imagine that all neurons of a DNN are points in nD space, with coordinates given by their so-called activations. Such a DNN is typically trained by feeding it a sequence of N examples.…”
Section: A) B)
confidence: 99%
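The snippet above views each neuron as a point in N-dimensional space, with one coordinate per training example. A minimal sketch of that construction, assuming a toy single-layer network with random weights (all sizes and names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

N, n_in, n_hidden = 200, 10, 16
X = rng.normal(size=(N, n_in))           # N hypothetical input examples
W = rng.normal(size=(n_in, n_hidden))    # one hidden layer, random weights
acts = np.maximum(X @ W, 0.0)            # ReLU activations, shape (N, n_hidden)

# Transposing the examples-x-neurons activation matrix makes each row one
# neuron's activation vector: a point in R^N.
neuron_points = acts.T                   # shape (n_hidden, N)

# Pairwise distances between neurons; nearby rows are similarly behaving
# units, which is the structure a 2D projection of these points would reveal.
dists = np.linalg.norm(
    neuron_points[:, None, :] - neuron_points[None, :, :], axis=-1
)
print(neuron_points.shape, dists.shape)
```

Projecting `neuron_points` to 2D (e.g., with t-SNE) then places similarly activating neurons close together, which is what the cited visualization builds on.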
“…for the reduction of clutter in a graph drawing via node placement [DEGM03]. The method was next extended to handle hierarchical graphs drawn in 2D [Hol06] and 3D [CC07, GBE08]; general graphs [HVW09, DMW07]; spatial trail sets in 2D [CZQ*08, EHP*11, LBA10b] and on curved surfaces [LBA10b]; sequence graphs [HET13] and dynamic graphs and eye tracks [NEH12, HEF*14]; directed graphs [SHH11, Mou15, PHT15]; attributed graphs [TE10, PHT15]; parallel coordinate plots [MM08, PBO*14, PW16]; multidimensional projections [MCMT14, RFFT17]; and 3D vector and tensor fields [YWSC12, BSL*14, EBB*15]. Alongside the growing interest in applying bundling to many data types, a wide array of bundling techniques has been proposed, based on control structures [Hol06]; force-directed models [HVW09, DMW07, NEH12, EBB*15]; computational geometry techniques [PXY*05, CZQ*08, LBA10b, EHP*11]; image-based techniques [HET12, BSL*14, Mou15, vdZCT16]; and graph simplification techniques [GHNS11, TE10].…”
Section: Introduction
confidence: 99%