2012
DOI: 10.1007/978-3-642-29946-9_23

Transfer Learning via Multiple Inter-task Mappings

Abstract: In this paper we investigate the use of multiple inter-task mappings for transfer learning in reinforcement learning tasks. We propose two transfer learning algorithms that can manipulate multiple inter-task mappings, one for model-learning and one for model-free reinforcement learning. Both algorithms incorporate mechanisms to select the appropriate mappings, helping to avoid the phenomenon of negative transfer. The proposed algorithms are evaluated in the Mountain Car and Keepaway domains. Exp…
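The abstract describes selecting among several candidate inter-task mappings so that a poor mapping does not cause negative transfer. As a purely illustrative sketch (not the paper's algorithms), the core idea can be shown with a toy source-task Q-table and two hypothetical state mappings: each mapping is scored by how well the Q-values it transfers explain a few early target-task transitions, and the mapping with the lowest error is selected.

```python
# Toy illustration only: the Q-table, mapping names, and transitions below
# are hypothetical, not taken from the paper.

# Source-task Q-table over discretized states 0..4 and actions 0..1.
source_Q = {(s, a): float(s * (a + 1)) for s in range(5) for a in range(2)}

# Candidate inter-task mappings: each maps a target state (0..9) to a source state.
mappings = {
    "halve": lambda s: s // 2,     # compress target states onto source states
    "clip":  lambda s: min(s, 4),  # clip target states into the source range
}

def transferred_value(mapping, s, a):
    """Initialize a target Q-value via the inter-task mapping."""
    return source_Q[(mapping(s), a)]

def score_mapping(mapping, experience, gamma=0.9):
    """Mean absolute TD error of transferred values on early target experience.
    Lower is better: the mapping whose transferred Q-function best explains
    the observed transitions is kept, guarding against negative transfer."""
    errors = []
    for (s, a, r, s2) in experience:
        target = r + gamma * max(transferred_value(mapping, s2, b) for b in range(2))
        errors.append(abs(target - transferred_value(mapping, s, a)))
    return sum(errors) / len(errors)

# A few hypothetical target-task transitions (s, a, r, s').
experience = [(8, 1, 1.0, 9), (3, 0, 0.0, 4), (6, 1, 0.5, 7)]

best = min(mappings, key=lambda name: score_mapping(mappings[name], experience))
print(best)  # the mapping with the lowest TD error on the sampled transitions
```

The selection criterion here (TD error on early experience) is one plausible choice; the paper's own selection mechanisms may differ.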

Cited by 14 publications (22 citation statements). Citing publications span 2013–2022. References 10 publications.
“…Furthermore, keeper-team behaviors evolved and transferred to increasingly complex keep-away tasks were found to be comparable in average task performance to keeper-team policies derived with RL methods and policy transfer (Stone et al, 2006b;Whiteson and Stone, 2006). Verbancsics and Stanley (2010) also used HyperNEAT to demonstrate successful transfer of collective behaviors between Knight's Joust, which is a multiagent predator-prey task variant (Taylor et al, 2010), and keep-away soccer tasks. The efficacy of this policy transfer method was supported by improved task performance on target tasks given further behavior evolution.…”
Section: Evolutionary Policy Transfer
confidence: 99%
“…These variants are a fitness function (objective-based search), behavioral diversity maintenance (novelty search), genotypic diversity maintenance (Section 3), and both genotypic and behavioral diversity maintenance hybridized with objective-based search. RoboCup keep-away was selected as it is a well-established multiagent (robot) experimental platform (Taylor et al, 2010). This study thus evaluates various evolutionary search methods coupled with policy transfer as a means to increase the quality of evolved collective (keep-away) behaviors.…”
Section: Hybridized Novelty and Objective-based Evolutionary Search U…
confidence: 99%