2004
DOI: 10.1007/978-3-540-30115-8_16

Model Approximation for HEXQ Hierarchical Reinforcement Learning

Abstract: HEXQ is a reinforcement learning algorithm that discovers hierarchical structure automatically. The generated task hierarchy represents the problem at different levels of abstraction. In this paper we extend HEXQ with heuristics that automatically approximate the structure of the task hierarchy. The construction, learning, and execution time of a task hierarchy, as well as its storage requirements, may be significantly reduced and traded off against solution quality.
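The abstract does not spell out the approximation mechanism, so the following is only a minimal, hypothetical sketch of the kind of storage-versus-quality trade-off it describes: each subtask keeps only exit states whose learned value is within eps of its best exit, shrinking the Q-tables at the risk of discarding routes a parent task might have preferred. The subtask names, exit values, state counts, and the pruning rule itself are illustrative assumptions, not HEXQ's actual heuristics.

```python
# Illustrative sketch only: the hierarchy, exit values, and pruning rule
# below are assumptions, not HEXQ's actual construction.

def approximate_exits(exit_values, eps):
    """Keep only exits whose learned value is within eps of the best exit.
    Pruning shrinks the subtask's Q-table and the parent's action set, but
    routes through pruned exits become unavailable, so quality may drop."""
    best = max(exit_values.values())
    return {e: v for e, v in exit_values.items() if best - v <= eps}

# Toy hierarchy level: exit state -> learned value of leaving there.
subtasks = {
    "navigate-room": {"door-n": 9.0, "door-s": 8.7, "door-e": 4.1},
    "fetch-key":     {"key-a": 5.0, "key-b": 4.9},
}
STATES_PER_SUBTASK = 25   # assumed abstract-state count per subtask

for eps in (0.0, 0.5, 5.0):
    before = after = 0
    for exits in subtasks.values():
        kept = approximate_exits(exits, eps)
        before += STATES_PER_SUBTASK * len(exits)   # Q-rows without pruning
        after += STATES_PER_SUBTASK * len(kept)     # Q-rows after pruning
    print(f"eps={eps}: Q-table rows {before} -> {after}")
```

A smaller eps prunes more aggressively, cutting storage and learning time but removing more of the exit options a higher-level task can choose among, which is one way the time/storage-versus-quality trade-off in the abstract can play out.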

Cited by 12 publications (14 citation statements). References 4 publications.
“…These problems have been widely used in the literature to evaluate the performance of cooperative Q-learning algorithms [12]-[13], [24]-[26].…”
Section: Methods
Confidence: 99%
“…RL can be applied to two types of learning problems [24]. First, single-task problems (e.g., shortest path problem), in which the learner is required to learn a single task.…”
Section: Test Problems
Confidence: 99%
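As a concrete instance of the single-task setting this statement mentions, here is a minimal tabular Q-learning sketch for a shortest-path task on a one-dimensional corridor. The environment, the -1 per-step reward, and the hyperparameters are illustrative assumptions, not details from the cited papers.

```python
# Minimal sketch, assuming a toy corridor environment: tabular Q-learning
# for a single-task shortest-path problem. All numbers are illustrative.
import random

N = 6                         # states 0..5, goal at state 5
ACTIONS = (-1, +1)            # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.95, 0.1

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    done = (s2 == N - 1)
    return s2, (0.0 if done else -1.0), done   # -1 per step => shortest path

for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print("greedy actions along the corridor:", greedy)   # expect all +1
```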
“…The literature [29] contains two different ways to tackle UTS in RL: (a) using null state transitions and/or generating negative rewards [9,23,6], and (b) manually discarding actions from the action repertoire that could lead to an undesirable state [26], i.e. defining state-dependent action repertoires A(s).…”
Section: Constrained MDPs
Confidence: 99%
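To make the two mechanisms in this statement concrete, here is a minimal sketch contrasting them on a toy gridworld: (a) a null transition plus a negative reward, and (b) a state-dependent action repertoire A(s) that withholds the offending action entirely. The forbidden cell, penalty value, and function names are illustrative assumptions.

```python
# Minimal sketch, assuming a toy grid with one undesirable transition state
# (UTS). Both mechanisms below are illustrative, not from the cited papers.

FORBIDDEN = {(1, 1)}                     # assumed undesirable cell
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step_penalized(s, a):
    """(a) Null transition: moving into a UTS leaves the state unchanged
    and yields a negative reward, so learning steers away from it."""
    s2 = (s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1])
    if s2 in FORBIDDEN:
        return s, -10.0                  # stay put, pay a penalty
    return s2, -1.0                      # ordinary step cost

def repertoire(s):
    """(b) State-dependent action set A(s): actions leading into a UTS are
    removed up front, so the agent can never select them."""
    return [a for a, (dx, dy) in ACTIONS.items()
            if (s[0] + dx, s[1] + dy) not in FORBIDDEN]

print(step_penalized((1, 0), "up"))      # ((1, 0), -10.0): blocked + penalty
print(repertoire((1, 0)))                # 'up' is absent from A((1, 0))
```

The penalty variant lets the agent discover the constraint through experience, while the repertoire variant enforces it by construction; the latter needs manual specification per state, which is the drawback the quoted statement points to with the word "manually".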