2018 21st International Conference on Intelligent Transportation Systems (ITSC) 2018
DOI: 10.1109/itsc.2018.8569729
A Belief State Planner for Interactive Merge Maneuvers in Congested Traffic

Cited by 70 publications (67 citation statements)
References 15 publications
“…Future work involves using more sophisticated techniques to estimate driver behavior. Other algorithms to learn belief state policies could be considered, as well as a direct comparison with online POMDP solvers [3]. Although our RL agent learned more efficient policies, an online planner may provide greater robustness.…”
Section: Discussion
confidence: 99%
“…These approaches can scale to large environments and continuous state spaces, but they still suffer from the curse of dimensionality. The computational complexity associated with dense traffic scenarios limits the planning to short time horizons [3], [6] or limits the number of vehicles considered [4], [5].…”
Section: Introduction
confidence: 99%
“…Classical approaches that subdivide the motion planning into behavior and trajectory planning [2] have the advantage of good computational tractability and modularity [4]. However, because feedback between the two stages is neglected, the set of solutions is reduced.…”
Section: Related Work
confidence: 99%
“…Finally, approaches based on Markov Decision Processes use learning data, mostly via reinforcement learning [5] or related methods [4], to learn an appropriate control law guiding the ego vehicle through the merging scenario. The approaches of this category solve the problem holistically, inherently accounting for uncertainties.…”
Section: Related Work
confidence: 99%
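The statement above describes learning a merge control law by reinforcement learning on an MDP. As a rough illustration of that idea (not the cited paper's method), the sketch below runs tabular Q-learning on a hypothetical toy merge MDP: the state is a coarse distance-to-gap bucket, and the agent learns to adjust toward the gap and merge only once aligned. All states, actions, and rewards here are invented for illustration.

```python
import random

def train_merge_policy(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on a toy merge MDP (illustrative only).

    State: distance-to-gap bucket 0..4 (0 = aligned with the gap).
    Actions: 0 = hold speed, 1 = adjust toward the gap, 2 = merge.
    Merging while aligned succeeds (+10); merging early fails (-10);
    every other step costs -1, pushing the agent toward efficient merges.
    """
    rng = random.Random(seed)
    n_states, n_actions = 5, 3
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(1, n_states)  # start somewhere off the gap
        for _ in range(20):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            if a == 2:  # attempt to merge: terminal step
                r = 10.0 if s == 0 else -10.0
                Q[s][a] += alpha * (r - Q[s][a])
                break
            # adjusting closes the gap by one bucket; holding stays put
            s2 = max(0, s - 1) if a == 1 else s
            r = -1.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train_merge_policy()
greedy_policy = [max(range(3), key=lambda a: Q[s][a]) for s in range(5)]
```

Under this toy model the greedy policy converges to "adjust until aligned, then merge", which is the qualitative behavior the quoted passage attributes to RL-based merge planners.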