2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8461053

VisualBackProp: Efficient Visualization of CNNs for Autonomous Driving

Abstract: This paper proposes a new method, which we call VisualBackProp, for visualizing which sets of pixels of the input image contribute most to the predictions made by the convolutional neural network (CNN). The method relies on the intuition that the feature maps contain less and less information irrelevant to the prediction decision when moving deeper into the network. The technique we propose was developed as a debugging tool for CNN-based systems for steering self-driving cars and is therefore …
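
The abstract only hints at the mechanism, but the underlying idea (feature maps deeper in the network retain mostly prediction-relevant activations) can be illustrated in code. The following is a minimal sketch of a VisualBackProp-style pass, not the authors' exact implementation: it assumes a plain stack of Conv2d+ReLU blocks, uses bilinear upsampling in place of the paper's deconvolution with unit weights, and the toy layer sizes and the PilotNet-style 66x200 input are assumptions made for illustration.

import torch
import torch.nn.functional as F

def visual_backprop(blocks, image):
    # Forward pass: keep the channel-averaged feature map after every block.
    averaged = []
    x = image.unsqueeze(0)                            # 1 x C x H x W
    for block in blocks:
        x = block(x)
        averaged.append(x.mean(dim=1, keepdim=True))  # 1 x 1 x h x w

    # Backward pass: repeatedly upscale the deeper mask to the shallower
    # resolution and multiply pointwise, so only regions that stay active
    # at every depth survive in the final mask.
    mask = averaged[-1]
    for feat in reversed(averaged[:-1]):
        mask = F.interpolate(mask, size=feat.shape[-2:],
                             mode="bilinear", align_corners=False)
        mask = mask * feat
    mask = F.interpolate(mask, size=image.shape[-2:],
                         mode="bilinear", align_corners=False)
    mask = mask - mask.min()                          # min-max normalize
    return (mask / (mask.max() + 1e-8)).squeeze(0)    # 1 x H x W in [0, 1]

# Toy usage with a hypothetical three-block CNN and a random camera frame.
blocks = torch.nn.ModuleList([
    torch.nn.Sequential(torch.nn.Conv2d(3, 8, 5, stride=2), torch.nn.ReLU()),
    torch.nn.Sequential(torch.nn.Conv2d(8, 16, 5, stride=2), torch.nn.ReLU()),
    torch.nn.Sequential(torch.nn.Conv2d(16, 32, 3, stride=2), torch.nn.ReLU()),
])
frame = torch.rand(3, 66, 200)              # PilotNet-style input size (assumed)
saliency = visual_backprop(blocks, frame)   # saliency mask over the input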

Cited by 125 publications (125 citation statements). References 25 publications.

“…We visualize raw input images with salient objects marked by a green circle, e.g., a bus pulling off, which is mentioned by an advice input (1st row). The provided advice (1)–(6) is shown at the bottom of the figure. We visualize attention heat maps from our trained model but with a synthetic token <none> (i.e., without advice, 2nd row).…”
Section: Attention Difference
confidence: 99%
“…The recent achievements [3,27] suggest that deep neural models can be applied to vehicle controls in an end-to-end manner by effectively learning latent representations from data. Explainability of these deep controllers has increasingly been explored via a visual attention mechanism [8], a deconvolution-style approach [2], and a natural language model [9]. Such explainable models will be an important element of human-vehicle interaction because they allow people and vehicles to understand and anticipate each other's actions, hence to cooperate effectively.…”
Section: Introduction
confidence: 99%
“…An autonomy of 98% was reached on a 20-km drive from Holmdel to Atlantic Highlands, NJ. Through training, PilotNet learns how the steering commands are computed by a human driver (Bojarski et al.). The focus is on determining which elements in the input traffic image have the most influence on the network's steering decision.…”
Section: Motion Controllers for AI-based Self-driving Cars
confidence: 99%
“…they use visual cues such as hairs and rulers as indicators of the lesion category. To demonstrate this problem we employ the visualization technique called VisualBackProp [9], which highlights the part of the image that the network focuses on when forming its prediction. Figure 2 shows the results obtained for the traditionally trained deep model (without performing data purification or augmentation) on the raw data.…”
Section: Data Purification Problem
confidence: 99%
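
The passage above applies VisualBackProp as an inspection tool. As an illustration only (not code from the cited work), a saliency mask produced by such a method might be overlaid on the input image to check whether the network keys on spurious cues such as rulers or hairs; the image and mask below are random placeholders.

import numpy as np
import matplotlib.pyplot as plt

image = np.random.rand(224, 224, 3)      # stand-in for a dermoscopy photo
mask = np.random.rand(224, 224)          # stand-in for a saliency mask in [0, 1]

plt.imshow(image)                        # lesion image
plt.imshow(mask, cmap="jet", alpha=0.4)  # translucent heat map on top
plt.axis("off")
plt.savefig("saliency_overlay.png", bbox_inches="tight")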