2004
DOI: 10.1007/s00422-003-0452-4

A novel model of motor learning capable of developing an optimal movement control law online from scratch

Abstract: A computational model of a learning system (LS) is described that acquires the knowledge and skill necessary for optimal control of multisegmental limb dynamics (the controlled object, or CO), starting from "knowing" only the dimensionality of the object's state space. It is based on an optimal control problem setup different from that of reinforcement learning. The LS solves the optimal control problem online while practicing the manipulation of the CO. The system's functional architecture comprises several adaptive com…
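As a hedged, minimal sketch of what "developing a control law online from scratch" could look like in the simplest possible terms (the dynamics, quadratic cost, and gain-perturbation scheme below are illustrative assumptions, not the paper's algorithm), consider a learner that knows only the state and control dimensionalities and keeps whatever random change to its feedback gains lowers the cost of a practice trial:

```python
import numpy as np

# Minimal sketch, NOT the paper's algorithm: a learner that knows only the
# dimensionality of the controlled object's (CO's) state space and improves a
# linear feedback control law online by keeping random gain perturbations that
# lower an assumed quadratic cost. All names, dynamics, and costs are
# illustrative assumptions.

rng = np.random.default_rng(0)
n_state, n_ctrl = 4, 2                      # the only prior knowledge: dimensionalities
A = np.eye(n_state) + 0.01 * rng.standard_normal((n_state, n_state))  # CO dynamics, unknown to the learner
B = 0.05 * rng.standard_normal((n_state, n_ctrl))                     # CO input map, unknown to the learner

def practice_trial(K, steps=50):
    """Run one practice movement with feedback gains K and return the accrued cost."""
    x, cost = np.ones(n_state), 0.0          # fixed start so trials are comparable
    for _ in range(steps):
        u = -K @ x                                   # current control law
        cost += x @ x + 0.1 * (u @ u)                # assumed cost: accuracy + effort
        x = A @ x + B @ u + 0.01 * rng.standard_normal(n_state)
    return cost

K = np.zeros((n_ctrl, n_state))                      # "from scratch": zero feedback gains
best = practice_trial(K)
for _ in range(2000):                                # online practice loop
    K_new = K + 0.05 * rng.standard_normal(K.shape)
    cost = practice_trial(K_new)
    if cost < best:                                  # keep changes that improve performance
        K, best = K_new, cost
print(f"cost after practice: {best:.2f}")
```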

Cited by 26 publications (7 citation statements); references 19 publications.
“…These steps are an essential component of the behavioral paradigm, and therefore, the cost of the corresponding neural effort needs to be included in the optimality criterion. In the field of movement control, where an optimality approach has successfully accounted for vast experimental data (for reviews, see Shimansky et al 2004; Todorov 2004), the consideration of the cost of information processing was critical for understanding even relatively simple motor behaviors such as reaching to grasp and point-to-point movements. Also, the "trailing" pattern of joint coordination typically observed during well-learned arm movements can be fully understood only if neural effort for joint coordination is considered as a primary component of the optimality criterion (Dounskaia & Shimansky 2016; Goble et al 2007).…”
Section: Inclusion Of Neural Effort In Cost Function Can Explain Perc… (mentioning)
confidence: 99%
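As a hedged illustration only (the specific terms and weights below are assumptions, not the criteria used in the cited papers), a composite optimality criterion of the kind this statement refers to can be written as a sum of endpoint accuracy, muscular effort, and neural effort:

```latex
J \;=\; w_{acc}\,\lVert \mathbf{x}(T) - \mathbf{x}^{*} \rVert^{2}
\;+\; w_{eff}\int_{0}^{T} \lVert \mathbf{u}(t) \rVert^{2}\,dt
\;+\; w_{neu}\, C_{\mathrm{neural}}
```

Here $\mathbf{x}(T)$ is the final limb state, $\mathbf{x}^{*}$ the target, $\mathbf{u}(t)$ the motor command, and $C_{\mathrm{neural}}$ a hypothetical measure of the neural effort spent on information processing and joint coordination; the weights $w$ set the trade-off between the terms.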
“…Desired trajectory planning implies that the neural processing in the mosaic of brain areas involved in online sensorimotor control does little more than play a prerecorded movement tape, which is unlikely [4]. Consequently, we and others [11,22-25] have focused on optimal feedback control models, which predict not only average behavior but also the task-specific sensorimotor contingencies used to generate intelligent adjustments online. Such adjustments enable biological systems to "solve a control problem repeatedly rather than repeat its solution" [26], and thus afford remarkable levels of performance in the presence of noise, delays, internal fluctuations, and unpredictable changes in the environment.…”
Section: Biological Motivation and Relation To Optimal Control (mentioning)
confidence: 99%
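A hedged sketch of the optimal-feedback-control idea invoked above: rather than replaying a prerecorded trajectory, the controller solves a finite-horizon LQR problem and applies state feedback, so a mid-movement perturbation is corrected online. The toy plant, horizon, and cost weights are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

# Hedged sketch of optimal feedback control (finite-horizon LQR): feedback
# gains, not a prerecorded trajectory, generate online corrections.
n, m, T = 4, 2, 30
A = np.eye(n); A[0, 2] = A[1, 3] = 0.1          # toy discrete-time point-mass-like dynamics
B = np.zeros((n, m)); B[2, 0] = B[3, 1] = 0.1
Q, R = np.eye(n), 0.01 * np.eye(m)              # accuracy vs. effort trade-off (assumed)

# Backward Riccati recursion yields time-varying feedback gains K[t].
S, K = Q.copy(), []
for _ in range(T):
    K_t = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ S @ (A - B @ K_t)
    K.append(K_t)
K = K[::-1]

# Forward pass: the feedback law u = -K[t] @ x absorbs an unexpected perturbation.
x = np.array([1.0, 1.0, 0.0, 0.0])              # start away from the origin (the "target")
for t in range(T):
    if t == 10:
        x[0] += 0.5                              # unexpected mid-movement perturbation
    x = A @ x + B @ (-K[t] @ x)
print("final state (should be near the target):", np.round(x, 3))
```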
“…In either case, control signals must ultimately be expressed in the intrinsic, musculoskeletal coordinates of motor execution. Both approaches posit the use of internal models of the limb (Wolpert et al 1995), whether to transform a desired kinematic trajectory into kinetic commands (Kalaska and Crammond 1992; Shadmehr and Mussa-Ivaldi 1994), or for optimal state estimation (Shimansky et al 2004; Todorov 2004; Scott 2012). These internal models are learned during development, but it is also possible to modify them on a shorter time scale, for example, when using a heavy tool that requires altered forces to produce a given movement.…”
Section: Introduction (mentioning)
confidence: 99%
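A hedged sketch of the second use of internal models mentioned above, optimal state estimation: an internal forward model predicts the next limb state from an efference copy of the motor command, and delayed, noisy sensory feedback corrects the prediction (a Kalman-filter-style predictor-corrector). The dynamics, noise levels, and observation model are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: internal forward model + sensory correction (Kalman filter).
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # internal model of limb dynamics (position, velocity)
B = np.array([[0.0], [0.1]])             # how motor commands enter the dynamics
H = np.array([[1.0, 0.0]])               # noisy sensors observe position only
Q, R = 1e-4 * np.eye(2), np.array([[0.05]])

x_true = np.array([0.0, 0.0])
x_hat, P = np.zeros(2), np.eye(2)
for t in range(50):
    u = np.array([0.2])                                  # some motor command
    x_true = A @ x_true + B @ u + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)  # noisy position measurement

    # Predict with the internal model (efference copy of u), then correct with feedback.
    x_hat = A @ x_hat + B @ u
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
    x_hat = x_hat + K @ (y - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print("true vs. estimated state:", np.round(x_true, 3), np.round(x_hat, 3))
```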