Motion, Interaction and Games 2019
DOI: 10.1145/3359566.3360071
Low Dimensional Motor Skill Learning Using Coactivation

Cited by 8 publications (4 citation statements)
References 44 publications
“…The motivation for testing spaces of different dimension ( m ) is perhaps less obvious. It has been shown repeatedly that many actions that appear complex are in fact highly coordinated (e.g., Safonova et al, 2004 ; Ranganath et al, 2019 ). It seems likely that some reduced dimensional space would be adequate for representing the entire successful action family.…”
Section: Methods (mentioning)
confidence: 99%
“…Ranganath et al. [RXKZ19] compressed the action space using principal or independent component analysis, reducing the size of the output of the DRL model. Their method is otherwise similar to [PALvdP18].…”
Section: Character Control (mentioning)
confidence: 99%
“…They also sampled initial states to favour states from which the current model achieves relatively lower rewards. Ranganath et al [RXKZ19] compressed the action space using principal or independent component analysis, reducing the size of the output of the DRL model. Their method is otherwise similar to [PALvdP18].…”
Section: Other Motions (mentioning)
confidence: 99%
“…Recent DRL methods either directly imitate mocap examples [Peng et al 2018a;Won et al 2020], which makes strategy discovery hard if possible; or adopt a de novo approach with no example at all [Heess et al 2015], which often results in extremely unnatural motions for human like characters. Close in spirit to our work is [Ranganath et al 2019], where a low-dimensional PCA space learned from a single mocap trajectory is used as the action space of DeepMimic for tracking-based control. We aim to discover new strategies without tracking, and we use a large set of generic motions to deduce a task-and-strategy-independent natural pose space.…”
Section: Natural Pose Space (mentioning)
confidence: 99%
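The citation statements above describe the same core idea: compressing a character's action space with principal component analysis (PCA) learned from mocap poses, so a DRL policy outputs a small coordinated action vector instead of full joint targets. As a rough illustration only, not the authors' implementation, the PCA fit-and-decode step can be sketched with NumPy; all function names, shapes, and the toy data here are assumptions:

```python
import numpy as np

def fit_pca(poses, m):
    """Fit a rank-m PCA basis to mocap data.

    poses: (T, d) array, one d-dimensional joint-angle vector per frame.
    Returns the mean pose and an (m, d) orthonormal basis of the
    top-m principal axes (rows of Vt from the SVD of centered data).
    """
    mean = poses.mean(axis=0)
    _, _, vt = np.linalg.svd(poses - mean, full_matrices=False)
    return mean, vt[:m]

def decode(action, mean, basis):
    """Map an m-dimensional policy action back to a full pose target."""
    return mean + action @ basis

# Toy usage: 100 frames of a 30-DoF character, reduced to 6 dimensions.
rng = np.random.default_rng(0)
poses = rng.normal(size=(100, 30))
mean, basis = fit_pca(poses, m=6)
target = decode(np.zeros(6), mean, basis)  # zero action decodes to the mean pose
```

In this setup the DRL policy would emit the 6-dimensional `action`, and `decode` would produce the 30-dimensional joint target fed to the controller, which is the output-size reduction the [RXKZ19] citations refer to.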