2021 IEEE 4th International Conference on Soft Robotics (RoboSoft)
DOI: 10.1109/robosoft51838.2021.9479340

Model-Free Reinforcement Learning with Ensemble for a Soft Continuum Robot Arm

Cited by 31 publications (18 citation statements)
References 23 publications
“…Model-based [1,6] and learning-based [33,12,29] controllers have also proven successful, as well as hybrid policy designs [61,2,26]. Zhu et al [64] consider an origami-like robot with various design configurations that all inform policy optimization, and Morimoto et al [38] employ the soft actor-critic algorithm [23] for reaching tasks. Related, Vikas et al [59] present a modular approach to designing 3D-printed motor-tendon soft robots that can be readily fabricated, and a model-free algorithm for learning the corresponding control policy.…”
Section: Related Work (mentioning, confidence: 99%)
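The soft actor-critic algorithm named in the excerpt above optimizes a stochastic policy with an entropy bonus. Below is a minimal sketch of its core critic and actor losses in PyTorch, using placeholder network sizes and a random batch in place of real replay-buffer data; it illustrates the generic SAC update, not the specific architecture or hyperparameters of the cited reaching experiments.

```python
# Minimal sketch of the soft actor-critic (SAC) losses (placeholder shapes and data).
import math
import torch
import torch.nn as nn

state_dim, action_dim, alpha, gamma = 8, 3, 0.2, 0.99

# Twin critics Q(s, a) and a Gaussian policy pi(a | s) as small MLPs.
q1 = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
q2 = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 2 * action_dim))

def sample_action(s):
    """Reparameterized tanh-Gaussian action and its log-probability."""
    mean, log_std = policy(s).chunk(2, dim=-1)
    log_std = log_std.clamp(-5, 2)
    eps = torch.randn_like(mean)
    action = torch.tanh(mean + log_std.exp() * eps)
    log_prob = (-0.5 * eps.pow(2) - log_std - 0.5 * math.log(2 * math.pi)).sum(-1, keepdim=True)
    log_prob = log_prob - torch.log(1 - action.pow(2) + 1e-6).sum(-1, keepdim=True)  # tanh change of variables
    return action, log_prob

# Random placeholder batch standing in for replay-buffer transitions (s, a, r, s', done).
s = torch.randn(32, state_dim)
a = torch.rand(32, action_dim) * 2 - 1
r = torch.randn(32, 1)
s2 = torch.randn(32, state_dim)
done = torch.zeros(32, 1)

with torch.no_grad():  # critic target: r + gamma * (min_i Q_i(s', a') - alpha * log pi(a'|s'))
    a2, logp2 = sample_action(s2)
    q_next = torch.min(q1(torch.cat([s2, a2], -1)), q2(torch.cat([s2, a2], -1)))
    target = r + gamma * (1 - done) * (q_next - alpha * logp2)

critic_loss = ((q1(torch.cat([s, a], -1)) - target) ** 2).mean() \
            + ((q2(torch.cat([s, a], -1)) - target) ** 2).mean()

a_new, logp_new = sample_action(s)  # actor loss: E[alpha * log pi(a|s) - min_i Q_i(s, a)]
actor_loss = (alpha * logp_new
              - torch.min(q1(torch.cat([s, a_new], -1)), q2(torch.cat([s, a_new], -1)))).mean()
```

In practice each loss would be minimized with its own optimizer and target critics; the point here is only the entropy-regularized form of the targets.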
“…Therefore, data-driven control methods are considered useful for soft robots because of the difficulty of modeling them, and the application of reinforcement learning has been proposed (Bhagat et al., 2019). Data-driven methods are often based on machine learning and include sampling data by actually moving the robot and modeling it with machine learning (Bruder et al., 2019; Buchler et al., 2018; George Thuruthel et al., 2017; Giorelli et al., 2015; Lee et al., 2017; Rolf and Steil, 2014; Thuruthel et al., 2017), as well as learning implemented directly on the controller by moving the robot and using reinforcement learning (Chattopadhyay et al., 2018; Morimoto et al., 2021).…”
Section: Related Work (mentioning, confidence: 99%)
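As a rough illustration of the first family described above (collect transitions by moving the robot, then fit a model with machine learning), the sketch below regresses a neural forward dynamics model on logged (state, action, next state) tuples. The dimensions and data are placeholder assumptions, not taken from any of the cited works.

```python
# Sketch: fitting a data-driven forward model s_{t+1} ~ f(s_t, a_t) from logged transitions.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 3
model = nn.Sequential(nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
                      nn.Linear(128, state_dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for a dataset collected by actually moving the robot.
states = torch.randn(1024, state_dim)
actions = torch.rand(1024, action_dim) * 2 - 1
next_states = states + 0.1 * torch.randn(1024, state_dim)

for epoch in range(50):
    pred = model(torch.cat([states, actions], dim=-1))
    loss = ((pred - next_states) ** 2).mean()  # plain MSE regression on next states
    opt.zero_grad()
    loss.backward()
    opt.step()
```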
“…Model-based reinforcement learning is feasible to some extent for continuum robot arms, which can be modeled and are relatively simple in structure and materials. However, if the robot moves in 3D space or has a large number of actuators, the differences between the real and simulated robots grow and learning is adversely affected, regardless of whether the model is created by humans or acquired from data (Morimoto et al., 2021). Therefore, to select a reinforcement learning method applicable to many continuum robot arms, a model-free approach is preferable, in which the robot model is neither provided by the user nor acquired as a forward model through learning.…”
Section: Related Work (mentioning, confidence: 99%)
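To make the model-free setting concrete, the sketch below shows an interaction loop in which the controller only ever receives observations and rewards from the (real or simulated) arm and learns from stored transitions; no forward model is provided or fitted. The environment and agent classes are hypothetical stubs, and the ensemble-based agent of the cited paper is only indicated by a comment, not implemented.

```python
# Sketch of a model-free interaction loop for a continuum arm reaching task (stub classes).
import numpy as np

class ContinuumArmEnv:
    """Placeholder environment: observation = arm state + target, action = actuator commands."""
    def __init__(self, obs_dim=8, act_dim=3):
        self.obs_dim, self.act_dim = obs_dim, act_dim
    def reset(self):
        self.state = np.random.randn(self.obs_dim)
        return self.state
    def step(self, action):
        # Stand-in dynamics: on hardware this would be the physical arm responding to commands.
        self.state = self.state + 0.05 * np.random.randn(self.obs_dim)
        tip_error = float(np.linalg.norm(self.state[:3]))  # toy "distance of tip to target"
        return self.state, -tip_error, tip_error < 0.05, {}

class RandomAgent:
    """Stub standing in for a learned model-free policy (e.g. SAC with an ensemble of critics)."""
    def __init__(self, act_dim):
        self.act_dim = act_dim
    def act(self, obs):
        return np.random.uniform(-1.0, 1.0, self.act_dim)
    def update(self, transition):
        pass  # a gradient step on stored transitions would go here

env, agent = ContinuumArmEnv(), RandomAgent(act_dim=3)
for episode in range(5):
    obs, done, steps = env.reset(), False, 0
    while not done and steps < 100:
        action = agent.act(obs)
        next_obs, reward, done, _ = env.step(action)
        agent.update((obs, action, reward, next_obs, done))  # learns from transitions only
        obs, steps = next_obs, steps + 1
```

The key contrast with the model-fitting sketch above is that the agent never predicts the next state; it improves its policy purely from experienced transitions.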