2019
DOI: 10.3389/fnbot.2019.00061
From Rough to Precise: Human-Inspired Phased Target Learning Framework for Redundant Musculoskeletal Systems

Abstract: Redundant muscles in human-like musculoskeletal robots provide additional dimensions to the solution space. Consequently, the computation of muscle excitations remains an open question. Conventional methods like dynamic optimization and reinforcement learning usually have high computational costs or unstable learning processes when applied to a complex musculoskeletal system. Inspired by human learning, we propose a phased target learning framework that provides different targets to learners at varying levels,…
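The abstract's core idea of phased targets can be sketched as a schedule of shrinking tolerances: the learner is first rewarded for reaching a coarse neighbourhood of the target, and the requirement tightens phase by phase. The function names, thresholds, and 2-D setup below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def phased_tolerances(n_phases=4, start=0.20, end=0.01):
    """Geometrically shrinking position tolerances, one per learning phase."""
    return np.geomspace(start, end, n_phases)

def reached(target, end_effector, tol):
    """Has the end effector entered the current phase's target region?"""
    return np.linalg.norm(target - end_effector) <= tol

tols = phased_tolerances()
target = np.array([0.3, 0.4])
attempt = np.array([0.25, 0.45])  # a rough early attempt
# The attempt satisfies the coarse early phases but not the precise late ones.
print([bool(reached(target, attempt, t)) for t in tols])  # → [True, True, False, False]
```

A curriculum like this lets a rough policy collect reward early, which is one plausible reading of how phased targets stabilise learning in a redundant system.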

Cited by 11 publications (4 citation statements)
References 46 publications
“…Depending on whether explicit models of musculoskeletal robots are established during the solution process, these methods can be divided into two categories: model-based and model-free methods, as shown in Table 3. The details are as follows: model-free methods [87–94]; brain-inspired methods, comprising muscle-synergies-inspired [95, 96], cortex-inspired [97, 98], hierarchical-mechanism-inspired [99, 100], and cerebellum-inspired [101, 102] methods. Many model-based control methods for musculoskeletal robots have been proposed by establishing kinematic and dynamic models of musculoskeletal systems. First, static and dynamic optimizations were used to study musculoskeletal robots.…”
Section: Brain-inspired Motion Control, 2.3.1 Methods Based On Control T…
confidence: 99%
“…Combined with the reinforcement learning algorithm, motion learning and multi-task learning were implemented using the RNN, and motion precision and multi-task learning performances were improved compared with traditional methods. Inspired by the hypothesis of motor preparation in the cortex, a motion-learning framework based on an RNN modulated by initial states was proposed [98]. The initial state defines the evolution of the RNN and the motion of the musculoskeletal robotic system.…”
Section: Brain-inspired Control Methods
confidence: 99%
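The quoted idea that "the initial state defines the evolution of the RNN" can be illustrated with a toy autonomous network: the recurrent weights and readout stay fixed, and only the initial hidden state selects which trajectory unfolds, echoing the motor-preparation hypothesis. Everything here (sizes, scales, readout) is an illustrative assumption, not the cited implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # fixed recurrent weights
readout = rng.normal(size=(2, n))                    # fixed 2-D motor readout

def rollout(h0, steps=50):
    """Run the network autonomously from initial state h0; no external input."""
    h, traj = h0, []
    for _ in range(steps):
        h = np.tanh(W @ h)        # autonomous recurrent dynamics
        traj.append(readout @ h)  # read out a 2-D "motion" signal
    return np.array(traj)

h_a = rng.normal(size=n)
h_b = rng.normal(size=n)
# Different initial states drive the same fixed network to different motions.
print(np.allclose(rollout(h_a), rollout(h_b)))  # → False
```

Because the weights never change across rollouts, the initial state alone plays the role of the "preparatory" signal that selects the movement.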
“…Another study [30] developed a deep Q network-based controller for a musculoskeletal arm model, employing a phased target-learning framework and human-inspired noise for stable and efficient exploration of the solution space. Complex reinforcement learning algorithms such as Deep Deterministic Policy Gradient [67], proximal policy optimisation [68], and Soft Actor-Critic [85] have also shown promising control results.…”
Section: Reinforcement Learning
confidence: 99%
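The deep-Q-network controller described above ultimately reduces to value-based action selection over a discretised excitation space. A minimal epsilon-greedy selection step, in the spirit of such a controller but entirely illustrative (the action space, Q-values, and epsilon schedule are assumptions, not the cited architecture), looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

def select_action(q_values, epsilon):
    """Epsilon-greedy: explore with probability epsilon, else pick argmax Q."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # random exploratory action
    return int(np.argmax(q_values))              # greedy exploitation

q = np.array([0.1, 0.7, 0.3])  # Q-values over 3 discretised excitation levels
print(select_action(q, epsilon=0.0))  # → 1 (purely greedy choice)
```

The "human-inspired noise" mentioned in the quote would replace the uniform exploratory branch with a structured noise process, but the greedy exploitation path is the same.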
“…Moreover, they can generate greater forces than purely soft robots by leveraging their rigid skeletons. Furthermore, owing to their high muscle redundancy, failures and breakage of muscles can be compensated for by other muscles through different muscle stimulation activations and combinations that achieve the same motion [30].…”
Section: Introduction
confidence: 99%