2016 5th Brazilian Conference on Intelligent Systems (BRACIS)
DOI: 10.1109/bracis.2016.027
Towards Knowledge Transfer in Deep Reinforcement Learning

Cited by 28 publications (22 citation statements)
References 11 publications
“…Therefore, transfer methods especially focused on the Deep RL scenario might help scale it to complex MAS applications, since the adaptation of this technique to MAS is still in its first steps (Castaneda, 2016; Gupta et al., 2017b). Two similar investigations carried out concurrently by different groups evaluated the potential of reusing networks in Deep RL tasks (Glatt et al., 2016; Du et al., 2016). Their results are consistent and show that knowledge reuse can greatly benefit the learning process, but that recovering from negative transfer when using Deep RL might be even harder.…”
Section: Transfer in Deep Reinforcement Learning
Citation type: mentioning (confidence: 99%)
“…Even when they have very different objectives, games often share similarities (such as using the same buttons to move the character). However, autonomously computing similarities and mappings between games is still an open problem (Glatt et al., 2016; Du et al., 2016).…”
Section: Video Games
Citation type: mentioning (confidence: 99%)
“…In supervised learning, transferring parameters from a model pre-trained on ImageNet (Russakovsky et al., 2015) has been shown to be an effective way of speeding up image classification on a new data set, especially when the source data set is similar to the target data set (Yosinski et al., 2014). In deep RL, the performance of a target agent can be improved by making use of the knowledge learned by one or more similar source agents (Du et al., 2016; Glatt et al., 2016; Parisotto et al., 2016; Rusu et al., 2016; Teh et al., 2017). All the works mentioned above perform pre-training and transfer under the same problem setting, that is, pre-training in supervised learning and transfer to supervised learning, or pre-training in RL and transfer to RL.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
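The parameter-transfer idea the quoted passage describes amounts to reusing learned weights as the initialization for a new task. A minimal sketch in PyTorch, assuming torchvision's ImageNet-pre-trained ResNet-18 and a hypothetical 10-class target task (the class count and optimizer settings are illustrative, not from any cited work):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet (Russakovsky et al., 2015).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Optionally freeze the transferred feature extractor so only the new
# head is trained at first.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for the (hypothetical) target task.
num_target_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# The transferred parameters act as a fixed (or slowly fine-tuned)
# initialization; only the new head receives gradient updates here.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Whether to freeze the transferred layers or fine-tune them is itself a design choice; as Yosinski et al. (2014) note, the benefit of either depends on how similar the source and target data sets are.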
“…In the context of machine learning, transfer learning refers to the situation where what has been learned in one setting is used to improve generalization in another, usually similar, setting. In the context of DRL, models trained in one domain are used as initial models for training the agent in new, similar domains [29]. It has also been used to transfer knowledge from simulated environments to physical environments [30].…”
Section: Transfer Learning
Citation type: mentioning (confidence: 99%)
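The weight-reuse strategy this passage describes can be sketched concretely: a target agent's network is initialized from a source agent's network, sharing all layers except the task-specific output head. The QNetwork class, its layer sizes, and the action counts below are hypothetical placeholders, not the architecture of any cited work:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """A small Q-network: shared feature layers plus a task-specific head."""
    def __init__(self, n_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.head = nn.Linear(256, n_actions)

    def forward(self, x):
        return self.head(self.features(x))

source = QNetwork(n_actions=4)  # trained on the source task (not shown)
target = QNetwork(n_actions=6)  # new task, possibly different action set

# Initialize the target agent with the source agent's shared layers;
# the task-specific head stays randomly initialized and is learned
# from scratch on the new domain.
target.features.load_state_dict(source.features.state_dict())
```

The same copy-and-continue-training pattern underlies sim-to-real transfer: the "source" network is trained in simulation and the copied weights seed training (or direct deployment) in the physical environment.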