2021
DOI: 10.1109/tiv.2020.3000323
Inverse Learning for Data-Driven Calibration of Model-Based Statistical Path Planning

Abstract: This paper presents a method for inverse learning of a control objective defined in terms of requirements and their joint probability distribution from data. The probability distribution characterizes tolerated deviations from the deterministic requirements and is learned using maximum likelihood estimation from data. Further, this paper introduces both parametrized requirements for motion planning in autonomous driving applications and methods for the estimation of their parameters from driving data. Both the…

Cited by 20 publications (10 citation statements)
References 50 publications
“…where w_LQR for each driver. The individuality of different drivers is highlighted, e.g., in [8], [22].…”
Section: Single-Vehicle Estimation
Mentioning confidence: 99%
“…Proof: Without loss of generality, let the priority list l in (21) be sorted in ascending order according to the vehicles' indices. Then, the index sets in (22) take the form…”
Section: Appendix: Proof of Lemma
Mentioning confidence: 99%
“…Then we can utilize (19) to update the belief of each interacting vehicle's leader or follower role, P(σ_k = l | ξ_t^k), l ∈ L = {leader, follower}. The MPC-based control strategy presented in (6) can be reformulated as…”
Section: B. Control Strategy for Multi-Vehicle Interactions
Mentioning confidence: 99%
“…The decision-making algorithm proceeds as follows: At the sampling time t, the ego vehicle measures the current states of each pairwise interaction and adds them, together with the previous control input, to the observation vectors ξ_t^k. The belief about each vehicle's leader or follower role is updated according to (19) based on ξ_t^k. Then, the MPC-based control strategy (22) is utilized to obtain the optimal trajectory (γ^0)^* by searching through all trajectories introduced in Section II-D, and the ego vehicle applies the first control input (u_t^0)^* over one sampling period to update its states.…”
Section: B. Control Strategy for Multi-Vehicle Interactions
Mentioning confidence: 99%