2023
DOI: 10.1007/978-3-031-22216-0_37
Sensor-Based Navigation Using Hierarchical Reinforcement Learning

Cited by 4 publications (2 citation statements) | References 9 publications
“…By decomposing tasks, HRL accelerates the learning process, as sub-tasks can be learned independently and in parallel, facilitating faster convergence to optimal policies [46]. This is particularly beneficial for real-time applications where rapid decision-making is crucial.…”
Section: Hierarchical Reinforcement Learning in the Aerial Robot's Ba...
confidence: 99%
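The excerpt above captures the core mechanism behind HRL's speed-up: each sub-task can be trained on its own and the results composed by a higher-level controller. The sketch below is not taken from the cited paper; the environment, function names, and hyperparameters are illustrative assumptions. It trains two tabular Q-learning sub-policies independently on a toy 1-D corridor and then sequences them with a fixed high-level controller.

```python
# Illustrative sketch only: hierarchical decomposition on a toy 1-D corridor.
# Two sub-task policies ("reach the midpoint", "reach the goal") are trained
# independently; a fixed high-level controller then sequences them.
import random

N_STATES = 10          # corridor cells 0..9
ACTIONS = (-1, +1)     # step left / step right

def train_subtask(target, episodes=500, alpha=0.5, gamma=0.95, eps=0.1):
    """Learn a Q-table for the sub-task 'reach `target`' in isolation."""
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        while s != target:
            # epsilon-greedy action selection
            a = random.randrange(2) if random.random() < eps \
                else max((0, 1), key=lambda i: q[s][i])
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == target else -0.01
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def run_option(q, target, s, max_steps=50):
    """Execute a learned sub-task policy greedily until its target is reached."""
    for _ in range(max_steps):
        if s == target:
            break
        a = max((0, 1), key=lambda i: q[s][i])
        s = min(max(s + ACTIONS[a], 0), N_STATES - 1)
    return s

if __name__ == "__main__":
    # Sub-tasks are trained independently of each other.
    q_mid = train_subtask(target=5)
    q_goal = train_subtask(target=9)
    # High-level controller: reach the midpoint, then the goal.
    s = 0
    for q, target in ((q_mid, 5), (q_goal, 9)):
        s = run_option(q, target, s)
    print("reached state", s)
```

Because each sub-policy only ever sees its own target, the two calls to train_subtask could run in separate processes, which is the kind of parallel, independent sub-task learning the excerpt refers to.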
“…In cases featuring a hierarchy of hidden states, this problem can be classified as a hierarchical POMDP. Despite the success of hierarchical POMDP models as machine learning methods and their widespread use in artificial intelligence for facilitating adaptive behaviour [1][2][3][4][5][6], our understanding of how the brain processes uncertain information to solve such challenges remains limited.…”
confidence: 99%