2012
DOI: 10.1007/s00422-012-0515-5
Bio-inspired adaptive feedback error learning architecture for motor control

Abstract: This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, such as a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the us…

Cited by 33 publications (40 citation statements)
References 42 publications
“…ISM not only provides the alpha drive for the mechanical tension, so that the arm can reach a balance point at the position defined by the trajectory generator, but also drives the muscle spindle through the gamma system (the muscle model is described in [16]), activating the spinal servo system with the ideal position deviation. This makes it straightforward for the model to track a new trajectory effectively, as in Tolu [18]. In fact, this is also valid for the IDM training.…”
Section: The Control System
confidence: 99%
“…The training signals for the PL and LWPR are the output of the feedforward or feedback controller and the total motor command, respectively. [The internal models take the] current state of the robot and produce the desired motor command that minimizes the output of the feedforward or feedback component (Tolu et al 2012). This, in turn, depends on both the sensory delays and the robot dynamics itself, so the internal model can provide motor commands that compensate for these delays, thereby improving the performance of the stabilization system.…”
Section: Internal Model
confidence: 99%
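The feedback-error-learning scheme this statement describes — an internal model trained on the total motor command so that the feedback contribution shrinks as learning progresses — can be sketched as follows. This is a minimal illustration only: an online least-mean-squares update stands in for LWPR, and the "true" inverse model, input distribution, and learning rate are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

w_true = np.array([2.0, -1.5, 0.5])  # hypothetical "true" inverse dynamics
w = np.zeros(3)                      # internal-model weights, learned online
eta = 0.05                           # LMS learning rate (stand-in for LWPR updates)

fb = []
for _ in range(2000):
    state = rng.uniform(-1.0, 1.0, size=3)  # sampled robot state (toy)
    u_ff = w @ state                 # feedforward command from the internal model
    # Idealized feedback correction: proportional to what the internal
    # model still gets wrong. This plays the role of the feedback
    # controller's output in feedback error learning.
    u_fb = w_true @ state - u_ff
    # The feedback command acts as the training error for the model,
    # so the model is effectively trained toward the total command.
    w += eta * u_fb * state
    fb.append(abs(u_fb))

early = float(np.mean(fb[:100]))     # average feedback effort at the start
late = float(np.mean(fb[-100:]))     # average feedback effort after learning
```

As the internal model converges, the feedback contribution decays toward zero, which is the signature of feedback error learning: the feedforward path gradually takes over from the feedback controller.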
“…LF Controller: The Learning Feedback controller overcomes the lack of a precise robot-arm dynamic model, ensures the stability of the system, and enables the control architecture to achieve better performance [16]. Further details about the LF controller are provided in [16]. Its gains were tuned to Kp = 7.5, Kv = 6.4 and Ki = 0.22 for the Fable robot.…”
Section: Adaptive Control Architecture
confidence: 99%
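The reported gains can be exercised in a toy closed loop. Only the gain values come from the citation; the single-joint unit-inertia plant, integration step, and step target below are illustrative assumptions, not the Fable robot's actual dynamics.

```python
kp, kv, ki = 7.5, 6.4, 0.22    # LF controller gains reported for the Fable robot
dt = 0.01                      # integration step (assumption)
q, qd, integ = 0.0, 0.0, 0.0   # joint position, velocity, error integral
q_des = 1.0                    # hypothetical step target (rad)

for _ in range(3000):          # simulate 30 s with explicit Euler
    e = q_des - q
    integ += e * dt
    u = kp * e - kv * qd + ki * integ  # proportional + velocity damping + integral
    qd += u * dt               # toy unit-inertia dynamics: q'' = u
    q += qd * dt
```

With these gains the toy joint is well damped (the velocity term dominates the proportional term), and the small integral gain removes steady-state error, which matches the role of the LF controller as a stabilizing backup to the learned model.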