2022
DOI: 10.1609/aaai.v36i4.20382

Learning Human Driving Behaviors with Sequential Causal Imitation Learning

Abstract: Learning human driving behaviors is an efficient approach for self-driving vehicles. Traditional Imitation Learning (IL) methods assume that expert demonstrations follow Markov Decision Processes (MDPs). In reality, however, this assumption does not always hold: spurious correlations can arise along the paths of historical variables because of unobserved confounders. Accounting for the latent causal relationships from unobserved variables to outcomes, this paper proposes Sequential Causal Imitation Learning.
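To make the confounding problem concrete, the following is a minimal illustrative sketch (not the paper's method): an unobserved confounder drives both a non-causal feature in the demonstrations and the expert's action, so naive behavioral cloning assigns weight to the spurious feature. All variable names and coefficients here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical toy setup: an unobserved confounder u (e.g., road condition)
# influences both a nuisance observation z (e.g., wiper state) and the
# expert's action a, while the true cause of a is the state feature s.
u = rng.binomial(1, 0.5, n).astype(float)           # unobserved confounder
s = rng.normal(0, 1, n)                             # causal state feature
z = u + 0.1 * rng.normal(0, 1, n)                   # spurious feature driven only by u
a = -1.5 * s + 2.0 * u + 0.1 * rng.normal(0, 1, n)  # expert action

# Naive behavioral cloning regresses a on everything it can observe (s, z):
X = np.column_stack([s, z])
w, *_ = np.linalg.lstsq(X, a, rcond=None)
print("weights on [s, z]:", w)  # z receives a large weight despite having no causal effect on a
```

If the deployment distribution decouples z from u, a policy that relies on z degrades; this is the failure mode the causal IL formulation targets.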

Cited by 11 publications (10 citation statements) | References 17 publications
“…Progress in statistical causality, such as Granger Causality (GC) and Structural Causal Models (SCM), has formalized causality testing, representation, and analysis with mathematical tools. Recently, ML algorithms have been used in conjunction with statistical causal representations for causal analysis, such as in multi-domain causal structural learning [38], causal imitation learning [39], causal discovery [40], and causal inference with graphical models [41]-[43]. Schölkopf [10] conversely examined how causality can be used in ML, especially semi-supervised learning, to enhance robustness by leveraging cross-domain invariant causal mechanisms.…”
Section: Related Work
confidence: 99%
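As a concrete example of the GC test mentioned in the statement above, the sketch below runs statsmodels' Granger causality test on synthetic series; the series, lag order, and library choice are illustrative assumptions, not taken from the cited works.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
y = rng.normal(size=n)                         # candidate cause
x = np.roll(y, 1) + 0.1 * rng.normal(size=n)   # x follows y with a one-step lag
x[0] = rng.normal()                            # discard the wrap-around sample

# Null hypothesis: the series in column 2 (y) does NOT Granger-cause column 1 (x).
data = np.column_stack([x, y])
results = grangercausalitytests(data, maxlag=2)
print("p-value at lag 1:", results[1][0]["ssr_ftest"][1])  # small p-value => reject the null
```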
“…The traditional CPL framework for SCM [69][70][71][72][73] first defines the SCM M as {U, V, F, P(u)} to capture the causal relationships between the variables of interest. It divides the endogenous variables V into observed variables O and latent variables L, where O ⊆ V and L = V \ O.…”
Section: Causal Policy Learning
confidence: 99%
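As a rough illustration of the tuple described above, here is a minimal, assumed Python container for M = {U, V, F, P(u)} with the O / L partition; the class name, field names, and toy mechanisms are hypothetical and not taken from the cited works.

```python
import random
from dataclasses import dataclass, field
from typing import Callable, Dict, Set


@dataclass
class SCM:
    exogenous: Set[str]                                        # U: background variables
    endogenous: Set[str]                                       # V: modeled variables
    mechanisms: Dict[str, Callable[[Dict[str, float]], float]] # F: v_i = f_i(parents, u_i)
    p_u: Callable[[], Dict[str, float]]                        # P(u): sampler over U
    observed: Set[str] = field(default_factory=set)            # O ⊆ V

    @property
    def latent(self) -> Set[str]:
        # L = V \ O: endogenous variables the learner never observes
        return self.endogenous - self.observed

    def sample(self) -> Dict[str, float]:
        # Draw exogenous noise, then evaluate mechanisms in insertion (topological) order.
        values = dict(self.p_u())
        for v, f in self.mechanisms.items():
            values[v] = f(values)
        return values


# Toy usage: a hidden driver state h confounds the observed covariate x and action a.
scm = SCM(
    exogenous={"u"},
    endogenous={"h", "x", "a"},
    mechanisms={
        "h": lambda vals: vals["u"],                       # latent driver state
        "x": lambda vals: vals["h"] + random.gauss(0, 0.1),# observed covariate
        "a": lambda vals: 2.0 * vals["h"],                 # observed action
    },
    p_u=lambda: {"u": random.gauss(0, 1)},
    observed={"x", "a"},
)
print(scm.latent)    # {'h'}
print(scm.sample())
```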
“…CPL performed well on several synthetic datasets, including highway-driving vehicle trajectories (Figure 3), MNIST digits [69,73], and visual navigation tasks [76]. It also shows promising applications in autonomous driving and industrial automation.…”
Section: Causal Policy Learning
confidence: 99%
“…Top-down (or bird's-eye) views common in autonomous driving datasets provide information on the dynamics of the agents but not on their awareness or decision-making process. The latter may be captured implicitly; however, as recent findings indicate, such unobserved variables cannot be effectively learned [6]. Likewise, training on synthetic data with similar characteristics would widen the sim-to-real gap, so simulations would greatly benefit from integrating more psychologically plausible elements [7].…”
Section: Introduction
confidence: 99%