2021
DOI: 10.1002/ail2.56
Toward explainable and advisable model for self‐driving cars

Abstract: Humans learn to drive through both practice and theory, for example, by studying the rules, while most self‐driving systems are limited to the former. Being able to incorporate human knowledge of typical causal driving behavior should benefit autonomous systems. We propose a new approach that learns vehicle control with the help of human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g., “I see a pedestrian crossing, so …
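To make the multi-task setup the abstract describes concrete, here is a minimal sketch in PyTorch of a network that jointly predicts a control action and a natural-language rationale from a shared visual summary. All module names, feature sizes, and the vocabulary are hypothetical illustrations of the general idea, not the authors' architecture:

import torch
import torch.nn as nn

class AdvisableDriver(nn.Module):
    """Hypothetical sketch: a shared visual summary feeds a control head and a language decoder."""
    def __init__(self, feat_dim=512, hidden=256, vocab_size=1000):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden)      # stand-in for a CNN over camera frames
        self.control_head = nn.Linear(hidden, 2)        # e.g., (acceleration, steering angle)
        self.embed = nn.Embedding(vocab_size, hidden)   # tokens of the rationale text
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.vocab_proj = nn.Linear(hidden, vocab_size)

    def forward(self, visual_feats, rationale_tokens):
        h = torch.tanh(self.encoder(visual_feats))      # (B, hidden) shared summary
        control = self.control_head(h)                  # (B, 2)
        emb = self.embed(rationale_tokens)              # (B, T, hidden)
        out, _ = self.decoder(emb, h.unsqueeze(0))      # decoder conditioned on the summary
        return control, self.vocab_proj(out)            # (B, 2) and (B, T, vocab)

# Joint loss: control regression plus next-token cross-entropy on the rationale.
model = AdvisableDriver()
feats = torch.randn(4, 512)                  # dummy visual features
tokens = torch.randint(0, 1000, (4, 12))     # dummy rationale token ids
control, logits = model(feats, tokens[:, :-1])
loss = nn.functional.mse_loss(control, torch.randn(4, 2)) \
     + nn.functional.cross_entropy(logits.reshape(-1, 1000), tokens[:, 1:].reshape(-1))
loss.backward()

Training both heads against the same visual summary is what allows a textual rationale or advice signal to shape the control policy rather than being a detached caption.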

Cited by 12 publications (3 citation statements). References: 26 publications.
“…At Level 7, the user reciprocates by advising the program in some way (Goyal et al., 2019; Yeh et al., 2019; Kim et al., 2021). A number of XAI researchers have proposed systems that would fall at this level.…”
Section: About Levels (Exploration) and (Interactive Adaptation)
confidence: 99%
“…Another way would be by analyzing the controller itself (introspective explanations). A related work proposed in [52] tries to generate introspective explanations by detecting the regions of the image that causally influence the network output. In this case, the images are obtained from the environment, not from the driver.…”
Section: XAI and Automotive Environment
confidence: 99%
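As a point of reference for what “detecting the regions of the image that causally influence the network output” can look like, here is a generic gradient-saliency sketch in PyTorch. It illustrates the general idea of introspective attribution only; the specific method of [52] may differ, and the model and input below are placeholders:

import torch
import torch.nn as nn

# Placeholder vision model; any differentiable image network would do here.
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # dummy camera frame

logits = net(image)
logits[0, logits.argmax()].backward()   # backprop the top-class score to the pixels

# Per-pixel influence: gradient magnitude, taking the max over color channels.
saliency = image.grad.abs().max(dim=1).values   # shape (1, 224, 224)

Regions with large gradient magnitude are those whose perturbation would most change the predicted score, which is the usual starting point for such attribution maps.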
“…Second, we propose how these SCMs can be integrated into a 3D detection, multi-object tracking, and motion forecasting network, similar to [10], enabling end-to-end training and interpretability. Other relevant works also apply interpretability techniques to autonomous driving, but most focus on interpretable planners and controllers [1,17], interpretable representations [3,17], post-hoc explanations [4], and advising the planner via natural language [5,6]. In contrast, our proposed architecture explains decisions of a tracker by providing interpretable SCMs as a proxy for its network's reasoning procedure.…”
Section: Introduction
confidence: 99%
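For readers unfamiliar with structural causal models, the following toy example (entirely hypothetical and unrelated to the cited tracking network) shows the core mechanism that makes SCMs an interpretable proxy for reasoning: each variable is a function of its parents plus noise, and an intervention replaces a structural equation:

import random

def sample(do_pedestrian=None):
    """Toy SCM: pedestrian -> brake, with an optional intervention do(pedestrian)."""
    u = random.random()  # exogenous noise
    # Intervening overrides the structural equation for 'pedestrian'.
    pedestrian = do_pedestrian if do_pedestrian is not None else (u < 0.2)
    brake = pedestrian or (random.random() < 0.05)  # braking mostly caused by the pedestrian
    return pedestrian, brake

# Interventional query: P(brake | do(pedestrian=1)) vs P(brake | do(pedestrian=0)).
p_true = sum(sample(True)[1] for _ in range(10000)) / 10000
p_false = sum(sample(False)[1] for _ in range(10000)) / 10000
print(f"P(brake|do(ped=1))={p_true:.2f}  P(brake|do(ped=0))={p_false:.2f}")

The gap between the two interventional probabilities is the causal effect the SCM makes explicit, which is exactly the kind of readable structure a learned tracker's reasoning can be projected onto.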