Proposed for presentation at the AIAA SciTech 2021 Forum, held January 11-15, 2021. Published 2020.
DOI: 10.2172/1836182
Utilizing Reinforcement Learning to Continuously Improve a Primitive-Based Motion Planner.

Cited by 3 publications (4 citation statements, all mentioning, published 2021-2022); references 0 publications.
“…Under such a context, continuous feasibility and control liveliness are key concerns for ensuring the accomplishment of higher-level tasks, such as overtaking a slow vehicle, merging into a busy fast lane, or avoiding an obstacle at high speed [117]. When selecting a mode from a large library of complicated dynamic modes becomes computationally demanding, learning-based approaches such as reinforcement learning can be leveraged to efficiently learn the proper mode arbitration decision according to the environment [118], [119].…”
Section: Reference (mentioning)
confidence: 99%
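The mode-arbitration idea quoted above can be made concrete with a small tabular Q-learning loop that picks a primitive ("mode") from a fixed library given a coarsely discretized traffic state. This is a minimal illustrative sketch, not the cited papers' method; the mode names, state binning, and reward signal are all assumptions.

```python
import random
from collections import defaultdict

MODES = ["cruise", "overtake", "merge", "evade"]  # assumed primitive library

def discretize(gap_m, rel_speed_mps):
    """Bin continuous sensing into a coarse state key (assumed binning)."""
    return (min(int(gap_m // 10), 5), min(int(abs(rel_speed_mps) // 5), 3))

# Q[state][a] estimates the long-run return of arbitrating to mode a.
Q = defaultdict(lambda: [0.0] * len(MODES))
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def select_mode(state):
    """Epsilon-greedy arbitration over the mode library."""
    if random.random() < EPS:
        return random.randrange(len(MODES))
    return max(range(len(MODES)), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    """One-step Q-learning backup after executing the chosen mode."""
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])

state = discretize(23.0, 4.2)      # -> (2, 0)
mode = MODES[select_mode(state)]   # e.g. "cruise" before any training
```

The appeal of learning the arbitration, rather than searching the full mode library online, is that `select_mode` reduces to a constant-time lookup at execution time.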
“…The motion planning by MP approach has been extended by Frazzoli and his co-workers, as well as by several others; see, e.g., [5-7], [12], [40-42]. However, some issues remain: first, the MPs are specific to a certain system, e.g., a specific vehicle, since they typically depend on parameters.…”
Section: Shortcomings (mentioning)
confidence: 99%
“…As an expansion of the library, Ref. [12] proposes an exploration phase via reinforcement learning and then extracts and adds new trims and maneuvers to the initial library.…”
Section: Introduction (mentioning)
confidence: 99%
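The expansion loop attributed to Ref. [12] can be sketched as: explore with a randomized policy, cut the resulting trajectories into fixed-length segments, and admit a segment into the library only if it is sufficiently unlike the primitives already there. The toy dynamics, segmentation scheme, and novelty test below are illustrative assumptions, not the paper's actual procedure.

```python
import math
import random

library = [[(0.0, 0.0), (1.0, 0.0)]]  # seed library: one straight "trim"

def rollout(steps=20):
    """Random-exploration rollout of a toy unicycle; returns an (x, y) path."""
    x = y = heading = 0.0
    path = [(x, y)]
    for _ in range(steps):
        heading += random.uniform(-0.3, 0.3)  # exploratory steering noise
        x += math.cos(heading)
        y += math.sin(heading)
        path.append((x, y))
    return path

def novelty(seg, lib):
    """Min endpoint-displacement distance to existing primitives (crude proxy)."""
    ex, ey = seg[-1][0] - seg[0][0], seg[-1][1] - seg[0][1]
    return min(math.hypot(ex - (p[-1][0] - p[0][0]),
                          ey - (p[-1][1] - p[0][1])) for p in lib)

for _ in range(50):                       # exploration phase
    path = rollout()
    for i in range(0, len(path) - 5, 5):  # fixed-length candidate segments
        seg = path[i:i + 6]
        if novelty(seg, library) > 2.0:   # keep only sufficiently new maneuvers
            library.append(seg)

print(f"library grew to {len(library)} primitives")
```

Each admitted segment plays the role of a new maneuver; in an actual MP setting it would also need a dynamic-feasibility check before joining the library.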
“…As a result, many past works have used Deep RL to solve a smaller part of the planning problem. For example, [26] used Deep RL to estimate reachability, [14] used RL for local planning, and [27-29] used RL to learn high-level actions (primitives).…”
Section: Introduction (mentioning)
confidence: 99%