2021
DOI: 10.3390/app11030975
Intrinsically Motivated Open-Ended Multi-Task Learning Using Transfer Learning to Discover Task Hierarchy

Abstract: In open-ended continuous environments, robots need to learn multiple parameterised control tasks in hierarchical reinforcement learning. We hypothesise that the most complex tasks can be learned more easily by transferring knowledge from simpler tasks, and faster by adapting the complexity of the actions to the task. We propose a task-oriented representation of complex actions, called procedures, to learn online task relationships and unbounded sequences of action primitives to control the different observable…
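The abstract's notion of a procedure, a task-oriented sequence of action primitives composed to solve a more complex task, can be sketched minimally as follows (all names here are hypothetical illustrations, not the paper's implementation):

```python
# Minimal sketch of "procedures" as sequences of parameterised action
# primitives. Illustrative only; the primitive names and the procedure
# representation are hypothetical, not the paper's API.

def reach(x, y):
    """Primitive: move the end effector to (x, y)."""
    return ("reach", x, y)

def grasp(force):
    """Primitive: close the gripper with the given force."""
    return ("grasp", force)

def run_procedure(steps):
    """Execute a procedure: a sequence of (primitive, parameters) pairs."""
    return [primitive(*params) for primitive, params in steps]

# A complex task reuses simpler primitives instead of being learned from scratch.
pick = [(reach, (0.3, 0.1)), (grasp, (5.0,))]
trace = run_procedure(pick)
print(trace)  # [('reach', 0.3, 0.1), ('grasp', 5.0)]
```

The key idea this sketch tries to convey is compositionality: a new task is expressed as an (in principle unbounded) sequence of already-learned primitives rather than a monolithic policy.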

Cited by 8 publications (8 citation statements)
References 48 publications
“…Despite its simplicity, and to some extent because of it, this scenario allows us to focus on the main functions of the proposed system and analyse their contribution to the learning process. Furthermore, this type of task is typical of the IMOL literature [44], [58], [25], as well as our previous work on GRAIL system [37], [53]: this allows us to both place this study in continuity with previous ones, and facilitate a comparison with other systems by highlighting advances and differences.…”
Section: A. Environment and Task
confidence: 94%
“…The concept of Intrinsic Motivations (IMs) is borrowed from the biological [9] and psychological literature [10] describing how novel or unexpected "neutral" stimuli, as well as the perception of control, can drive learning processes in the absence of rewards or assigned goals. In the computational field, IMs have been implemented to foster different autonomous processes such as state-space exploration [11], [12], [13], knowledge gathering [14], [15], learning repertoire of skills [16], [17], [18], affordance exploitation [19], [20], goal selection [21], [22], [23], and also boosting imitation learning techniques [24], [25].…”
Section: Introduction
confidence: 99%
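As one concrete instance of the IM signals surveyed in the excerpt above, a prediction-error intrinsic reward can be sketched with a linear forward model: the agent is intrinsically rewarded for "surprising" transitions, and the reward decays as the transition becomes predictable. This is a minimal sketch of the general idea, not the implementation of any cited system:

```python
import numpy as np

# Prediction-error intrinsic reward: a forward model predicts the next
# state, and the prediction error serves as an intrinsic reward that
# shrinks as the model learns. Linear model for illustration only.

class ForwardModel:
    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, dim))
        self.lr = lr

    def intrinsic_reward(self, state, next_state):
        """Surprise = norm of the prediction error."""
        pred = self.W @ state
        return float(np.linalg.norm(next_state - pred))

    def update(self, state, next_state):
        """One gradient step on the squared prediction error."""
        pred = self.W @ state
        self.W += self.lr * np.outer(next_state - pred, state)

model = ForwardModel(dim=2)
s, s_next = np.array([1.0, 0.0]), np.array([0.5, 0.5])
r0 = model.intrinsic_reward(s, s_next)
for _ in range(50):
    model.update(s, s_next)
r1 = model.intrinsic_reward(s, s_next)
print(r0 > r1)  # True: the reward decays as the transition becomes predictable
```

This decaying-surprise behaviour is what lets IM-driven agents move on from mastered regions of the state space to novel ones without any external reward.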
“…Such algorithms have been developed under the names of interactive reinforcement learning or active imitation learning in robotics. In Reference [85], they allowed the system to learn micro and compound actions while minimizing the number of requests for labeled data, by choosing when to ask, what information to ask for, and whom to ask for help. Such principles could inspire a smart home system to continue to adapt its model while minimizing user intervention and making the most of each intervention, by pointing out the missing key information.…”
Section: Temporal Drift
confidence: 99%
“…It usually transforms and adjusts the parameters of the network model in the source domain and applies it to the target domain [41–45]. Because there are few time-series samples in the target domain and the real-time data come in batches, retraining the model is a very challenging task, and the prediction task on the target data cannot otherwise be completed.…”
Section: Transfer Learning of Base Models
confidence: 99%
“…Transfer learning [37–40] is the process that uses the knowledge learned from the source domain to deal with the problems in the target domain. It usually transforms and adjusts the parameters of the network model in the source domain and applies it to the target domain [41–45].…”
Section: Base Model Transfer
confidence: 99%
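The parameter-transfer scheme these excerpts describe (reuse a source-domain model and adjust it on scarce target-domain data) can be sketched with a toy model whose feature layer is frozen and whose head is fine-tuned on a small target batch. This is a hypothetical setup for illustration, not the cited papers' networks:

```python
import numpy as np

# Parameter transfer sketch: copy a pretrained source-domain model,
# freeze its feature layer, and fine-tune only the head on the few
# target-domain samples that arrive in a batch. Illustrative only.

rng = np.random.default_rng(0)

W_feat = rng.normal(size=(4, 8))   # "pretrained" feature layer, frozen
w_head = rng.normal(size=4)        # head to be fine-tuned on the target

def predict(X, w_head):
    return np.tanh(X @ W_feat.T) @ w_head

def fine_tune(X, y, w_head, lr=0.05, steps=200):
    """Gradient descent on the head only; features stay frozen."""
    H = np.tanh(X @ W_feat.T)
    for _ in range(steps):
        grad = H.T @ (H @ w_head - y) / len(y)
        w_head = w_head - lr * grad
    return w_head

# A small batch of target-domain samples, as described in the excerpt.
X_t = rng.normal(size=(10, 8))
y_t = rng.normal(size=10)
err_before = np.mean((predict(X_t, w_head) - y_t) ** 2)
w_new = fine_tune(X_t, y_t, w_head)
err_after = np.mean((predict(X_t, w_new) - y_t) ** 2)
print(err_after < err_before)  # fine-tuning reduces target-domain error
```

Freezing the feature layer is what makes learning feasible with so few target samples: only a low-dimensional head is estimated per batch, while the transferred parameters carry the source-domain knowledge.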