2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00178
Explaining Autonomous Driving by Learning End-to-End Visual Attention

Abstract: Current deep-learning-based autonomous driving approaches yield impressive results, also leading to in-production deployment in certain controlled scenarios. One of the most popular and fascinating approaches relies on learning vehicle controls directly from data perceived by sensors. This end-to-end learning paradigm can be applied both in classical supervised settings and using reinforcement learning. Nonetheless, the main drawback of this approach, as in other learning problems, is the lack of explainability…
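To make the attention idea described in the abstract concrete, below is a minimal PyTorch sketch of an end-to-end driving network with a learned spatial attention bottleneck. The architecture, layer sizes, and two-value control output are illustrative assumptions, not the paper's actual model.

# Hypothetical sketch: end-to-end control prediction with a learned
# spatial attention map over ConvNet features (not the paper's exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Small ConvNet backbone; the paper uses its own architecture.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # A 1x1 conv scores each spatial location; a softmax turns the scores
        # into a probability map that can be visualised as an explanation.
        self.attn_score = nn.Conv2d(128, 1, kernel_size=1)
        self.head = nn.Linear(128, 2)  # e.g. steering angle and throttle (assumed)

    def forward(self, img):
        feats = self.backbone(img)                        # (B, 128, H', W')
        b, c, h, w = feats.shape
        scores = self.attn_score(feats).view(b, -1)       # (B, H'*W')
        attn = F.softmax(scores, dim=1).view(b, 1, h, w)  # attention map
        pooled = (feats * attn).sum(dim=(2, 3))           # attention-weighted pooling
        controls = self.head(pooled)                      # (B, 2)
        return controls, attn                             # attn explains the prediction

# Usage: forward a batch of dummy camera frames and inspect the attention map.
model = AttentionDrivingNet()
frames = torch.randn(4, 3, 160, 320)
controls, attn = model(frames)
print(controls.shape, attn.shape)  # torch.Size([4, 2]) torch.Size([4, 1, 20, 40])

Because the attention map is trained jointly with the control loss, the highlighted regions reflect what actually contributed to the predicted controls, which is the explainability argument made in the abstract.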

Cited by 48 publications (30 citation statements) | References 37 publications
“…Therefore, an attention mechanism can be useful in paying more attention to important vehicles and critical parts of the map in the decision-making problem. Attention mechanisms can be used with ConvNets to improve the explainability and interpretability of end-to-end deep neural networks [10], [11]. A multi-task attention-aware network with a ConvNet backbone was proposed by Ishihara et al. [12] to learn a driving policy via conditional imitation learning.…”
Section: Related Work (mentioning)
confidence: 99%
“…Kim et al. [22] adopted an attention-based method to filter out non-salient image regions, displaying only the regions that causally affect the steering control of a stand-alone vehicle. Similarly, [23] also used an attention model to visualize the perception of deep networks for autonomous driving. Saliency has also been employed to explain AI models for navigation [24], lane change detection [25], and driving behavior reasoning (e.g.…”
Section: XAI for Autonomous Driving (mentioning)
confidence: 99%
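As a rough illustration of how an attention map becomes the kind of visual explanation these works describe, the following sketch upsamples a low-resolution attention map and blends it with the input frame. The function name and blending scheme are hypothetical, not taken from any of the cited papers; it assumes an attention map like the one returned by the sketch above.

# Illustrative only: upsample a coarse attention map and blend it with the
# input frame to visualise which regions influenced the control output.
import torch
import torch.nn.functional as F

def attention_overlay(frame, attn, alpha=0.5):
    """frame: (3, H, W) RGB tensor in [0, 1]; attn: (1, h, w) attention map."""
    h, w = frame.shape[1:]
    # Bilinear upsampling of the attention map to the input resolution.
    heat = F.interpolate(attn.unsqueeze(0), size=(h, w),
                         mode="bilinear", align_corners=False)[0]
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)  # normalise to [0, 1]
    # Paint the attention into the red channel and alpha-blend with the frame.
    heatmap = torch.zeros_like(frame)
    heatmap[0] = heat[0]
    return (1 - alpha) * frame + alpha * heatmap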
“…XAI has been receiving growing attention in autonomous driving. Attempts have been made to explain the functions of various AI models for autonomous driving [21,22,23,24,25,26]. Yet, studies on XAI for AI-powered accident anticipation have not kept up with the accelerating pace of accident anticipation research.…”
Section: Introduction (mentioning)
confidence: 99%
“…In a study [7] conducted using the open-source driving simulator CARLA [10], it was reported that the driving performance of the imitation learning agent drops considerably under conditions such as urban areas not seen during training, changed weather conditions, and traffic congestion. Secondly, for such a safety-critical application as autonomous driving it is important to know how well a network perceives its visual inputs, but only a few studies have addressed this issue [8,21,24].…”
Section: Introduction (mentioning)
confidence: 99%