Adaptation to environmental change using reinforcement learning for robotic salamander
2019 · DOI: 10.1007/s11370-019-00279-6

Cited by 7 publications (15 citation statements: 1 supporting, 14 mentioning, 0 contrasting). References 20 publications.
“…Researchers applied RL to optimise CPGs in different scenarios [19]–[22]. The common factor among them is the formulation of the actor-critic method; yet, they include the CPG controller in the environment, as depicted in Fig.…”
Section: A. Related Work (mentioning)
Confidence: 99%
“…According to the authors [22], the motivations for including CPGs in the environment are their intrinsic recurrent nature and the amount of time necessary to train them, since CPGs have been considered Recurrent Neural Networks (RNNs), which are computationally expensive and slow to train. In [19], [20], during training and inference, the policy outputs a new set of parameters for the CPGs in response to observations from the environment at every time-step. In this case, the observations processed by the actor network, which usually represent the feedback, are responsible for producing a meaningful set of CPG parameters for the current state.…”
Section: A. Related Work (mentioning)
Confidence: 99%
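
To make the control flow described in these citation statements concrete, the following is a minimal Python sketch, not code from the cited papers: a single phase-oscillator CPG sits inside the environment, and at every time-step the policy maps the observed feedback to a fresh set of CPG parameters (frequency and amplitude) rather than to a torque. All names (CPGEnv, linear_policy), the one-joint oscillator, and the toy reward are illustrative assumptions.

    # Sketch of "CPG inside the environment, policy outputs CPG parameters".
    # Names and the single-oscillator model are assumptions for illustration,
    # not taken from [19]-[22].
    import numpy as np

    class CPGEnv:
        """Environment embedding a one-joint phase-oscillator CPG."""
        def __init__(self, dt=0.01):
            self.dt = dt
            self.phase = 0.0
            self.joint_angle = 0.0

        def reset(self):
            self.phase = 0.0
            self.joint_angle = 0.0
            return self._obs()

        def _obs(self):
            # Feedback seen by the actor network (proprioception here).
            return np.array([np.sin(self.phase),
                             np.cos(self.phase),
                             self.joint_angle])

        def step(self, cpg_params):
            # The action is a new set of CPG parameters, not a motor command.
            frequency, amplitude = cpg_params
            self.phase += 2.0 * np.pi * frequency * self.dt    # oscillator update
            self.joint_angle = amplitude * np.sin(self.phase)  # CPG output drives the joint
            reward = -abs(self.joint_angle - 0.5)              # toy tracking objective
            return self._obs(), reward

    def linear_policy(obs, weights):
        """Actor: maps the observation to (frequency, amplitude)."""
        raw = weights @ obs
        return np.array([0.5 + 0.5 * np.tanh(raw[0]),   # frequency kept in (0, 1) Hz
                         np.tanh(raw[1])])               # amplitude kept in (-1, 1)

    env = CPGEnv()
    obs = env.reset()
    weights = np.zeros((2, 3))
    for t in range(100):   # one rollout: the CPG is re-parameterised every step
        action = linear_policy(obs, weights)
        obs, reward = env.step(action)

Training the weights with an actor-critic method would wrap this rollout loop; the point of the sketch is only the interface the citation statements describe, in which the recurrent CPG dynamics stay inside the environment and the policy itself remains a cheap feed-forward map.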