2021
DOI: 10.1371/journal.pcbi.1008481

Hierarchical motor adaptations negotiate failures during force field learning

Abstract: Humans have the amazing ability to learn the dynamics of the body and environment to develop motor skills. Traditional motor studies using arm reaching paradigms have viewed this ability as the process of 'internal model adaptation'. However, behavior has not been fully explored in cases where reaches fail to attain the intended target. Here we examined human reaching under two force field types: one that induces failures (i.e., target errors) and one that does not. Our results show the presenc…

Cited by 10 publications (4 citation statements)
References 70 publications
“…Despite the limitations, the simple linearized OFC models were crucial in determining the relationship between the three properties of the task demands, their impact on coarticulation, and feedback control. Our approach is consistent with how ideal actor/estimator models have predominantly been employed as normative models to formalize an initial hypothesis and to generate testable predictions through simulation (29, 33, 35–37, 49–55). Such normative models should be distinguished from detailed biomechanical models, which can trade off interpretability and predictability to achieve better quantitative fits.…”
Section: Limitations (supporting)
confidence: 59%
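To make the linearized OFC idea in the statement above concrete, here is a minimal sketch of a finite-horizon optimal feedback controller (LQR) driving a point-mass reach. It is not taken from any of the cited studies; the mass, time step, horizon, and cost weights are illustrative assumptions.

```python
# Minimal sketch of a linearized optimal feedback control (OFC) model of a
# point-mass reach. All parameters are illustrative assumptions, not values
# from the cited studies.
import numpy as np

dt, mass, horizon = 0.01, 1.0, 60           # 10 ms steps, 1 kg point mass, 0.6 s reach
A = np.array([[1.0, dt], [0.0, 1.0]])       # state: [position, velocity]
B = np.array([[0.0], [dt / mass]])          # control input: force
Q_final = np.diag([1e4, 1e2])               # terminal position/velocity error cost
R = 1e-4 * np.eye(1)                        # effort cost

# Backward Riccati recursion for the time-varying feedback gains.
S = Q_final
gains = []
for _ in range(horizon):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = A.T @ S @ (A - B @ K)
    gains.append(K)
gains.reverse()

# Forward simulation of a reach from 0 m toward a 0.1 m target.
target = np.array([0.1, 0.0])
x = np.array([0.0, 0.0])
for K in gains:
    u = -K @ (x - target)                   # feedback control around the target state
    x = A @ x + B @ u
print("final position %.3f m, final velocity %.3f m/s" % (x[0], x[1]))
```

In this spirit, such a model is used to generate qualitative predictions (e.g., how endpoint accuracy trades off against effort), rather than to fit detailed biomechanics quantitatively.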
“…Mistry and colleagues (Mistry et al., 2013) made similar assumptions in the context of Linear-Quadratic-Gaussian (LQG) control (Todorov and Jordan, 2002), with estimated plant dynamics that differed from the true plant dynamics, which included the force field. Recently, Ikegami and colleagues (Ikegami et al., 2021) used the same approach to demonstrate that target failures and altered hand paths may interact to drive adaptation hierarchically. In these models, the level of adaptation depended on how much of the force field was compensated during movement by the approximate internal model, which simply takes the form of a function used in the controller.…”
Section: Computational Models of Human Reaching Adaptation (mentioning)
confidence: 99%
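The internal-model idea in this statement can be illustrated with a small simulation: a feedback controller whose internal model cancels only a fraction of a velocity-dependent (curl) force field, while the simulated hand always experiences the full field. The field gain, feedback gains, and compensation fractions below are illustrative assumptions, not parameters from Mistry et al. (2013) or Ikegami et al. (2021).

```python
# Sketch of partial force-field compensation by an approximate internal model.
# The controller's feedforward term cancels `compensation` * (true curl field);
# the residual field bends the hand path. All parameters are assumptions.
import numpy as np

dt, mass, steps = 0.01, 1.0, 80
b_true = 15.0                                # true curl-field gain (N*s/m)

def simulate(compensation):
    """Reach 0.1 m forward; return peak lateral deviation of the hand path."""
    pos = np.zeros(2)                        # x (lateral), y (forward)
    vel = np.zeros(2)
    kp, kd = 400.0, 40.0                     # simple feedback policy toward the target
    target = np.array([0.0, 0.1])
    max_lateral = 0.0
    for _ in range(steps):
        u = kp * (target - pos) - kd * vel                        # feedback command
        u += compensation * b_true * np.array([vel[1], -vel[0]])  # internal-model feedforward
        field = b_true * np.array([-vel[1], vel[0]])              # true curl-field force
        acc = (u + field) / mass
        vel += acc * dt
        pos += vel * dt
        max_lateral = max(max_lateral, abs(pos[0]))
    return max_lateral

for c in (0.0, 0.5, 1.0):
    print(f"compensation {c:.1f}: peak lateral deviation {simulate(c)*1000:.1f} mm")
```

Full compensation (1.0) yields a straight path, while partial compensation leaves a residual deviation, which is the sense in which the level of adaptation in such models is tied to how much of the field the approximate internal model cancels.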
“…Hierarchical reinforcement learning was one of the strongest theoretical fields with this computational objective (Wiering and Schmidhuber 1997; Parr et al. 1997), and Samejima and colleagues combined hierarchical reinforcement learning with multiple internal models (Samejima et al. 2003, 2006; Kawato and Samejima 2007). In the neuroscience of motor control, both hierarchical models (Kawato et al. 1987) and uniform, flat models (Todorov and Jordan 2002) have been proposed, and the field has oscillated back (Scott 2004) and forth (Franklin et al. 2008; Osu et al. 2015; Babič et al. 2016; Ikegami et al. 2021). In robotics and artificial intelligence, hierarchical reinforcement learning has been explored for almost 20 years (Morimoto and Doya 2001) and has recently regained popularity (Sugimoto et al. 2012a; Merel et al. 2019).…”
Section: Introduction (mentioning)
confidence: 99%