2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017
DOI: 10.1109/iros.2017.8206123
Model-free control for soft manipulators based on reinforcement learning

Cited by 60 publications (31 citation statements) · References 12 publications
“…Control policies obtained using RL techniques are also more robust to external disturbances, making them ideal for the BR2 manipulator, whose workspace depends on external loads. RL implementations in the context of soft robots are relatively new and have focused on traditional Q-learning [30], [31]. Both implementations control planar 2D motions, have a relatively small state-action space, and use fixed steps during transitions.…”
Section: Introduction (mentioning)
confidence: 99%
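The Q-learning controllers cited above discretize the planar workspace and move the tip by fixed actuation steps. The following is a minimal tabular sketch of that setup on a toy point-reaching task; the grid size, step scheme, and reward shaping are illustrative assumptions, not the settings used in [30], [31].

```python
import numpy as np

GRID = 5                                      # 5x5 discretized workspace (assumed)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # fixed transition steps
GOAL = (4, 4)

def step(state, a):
    """Apply one fixed actuation step; clip at the workspace boundary."""
    x = min(max(state[0] + ACTIONS[a][0], 0), GRID - 1)
    y = min(max(state[1] + ACTIONS[a][1], 0), GRID - 1)
    s2 = (x, y)
    done = s2 == GOAL
    return s2, (1.0 if done else -0.01), done

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning over the small discrete state-action space."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((GRID, GRID, len(ACTIONS)))
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):
            # epsilon-greedy exploration
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = step(s, a)
            Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s][a])
            s = s2
            if done:
                break
    return Q

def greedy_rollout(Q, max_steps=50):
    """Follow the learned greedy policy from the start state."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        s, _, done = step(s, int(np.argmax(Q[s])))
        path.append(s)
        if done:
            break
    return path

Q = train()
path = greedy_rollout(Q)
```

Because the transitions are deterministic and the state-action space is tiny, the table converges quickly; this smallness is exactly the limitation the quoted passage notes about the early Q-learning implementations.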
“…At present, the application of reinforcement learning to controlling soft arms is still at an early stage. In our previous work (You et al., 2017), control of the tip of the soft arm is implemented in a 2D plane based on Q-learning, and its robustness to damage to the actuators is demonstrated. Satheeshbabu et al. (2019) used a Deep Q-Network (DQN) method to implement open-loop positional control of the tip of a soft arm in 3D space.…”
Section: Related Work (mentioning)
confidence: 99%
“…One of the problems in using reinforcement learning to control soft-bodied arms is that the data are hard to obtain. For example, You et al. (2017) collected data on a physical platform, which is costly. To expedite training, Satheeshbabu et al. (2019) used a mathematical model presented in Uppalapati and Krishnan (2021) to generate virtual training data.…”
Section: Related Work (mentioning)
confidence: 99%
“…These trunks consist of multiple segments with dual actuation, i.e., electric motor and pneumatic [21–28]. In the current research scenario, the third-generation continuum robot, known as the bionic handling assistant (BHA) model developed by Festo [29–31], has entered the production environment. This is an advanced prototype constructed using lightweight-design concepts, with the capability to operate with increased flexibility.…”
Section: Structure (mentioning)
confidence: 99%