Face and Gesture 2011
DOI: 10.1109/fg.2011.5771381

The human motion database: A cognitive and parametric sampling of human motion

Cited by 19 publications (18 citation statements)
References 7 publications
“…HDM05 [7] provides a dataset of 1457 human whole-body motion clips with a total run length of around 50 minutes, which have been created by segmenting a continuous motion sequence demonstrated by five different non-professional actors. The Human Motion Database [8] provides five different datasets that have been acquired by using a systematic sampling methodology to select motions to be collected and additionally provides a survey of some existing motion databases in the cited article. In the Edinburgh CGVU Interaction Database [9], human whole-body motion for manipulation and object interaction tasks is captured using magnetic and RGB-D sensors.…”
Section: Related Work
confidence: 99%
“…We therefore decided early on that the annotation visualization must be interactive instead of using a rendered video of the motion from a fixed perspective. We used the Web Graphics Library (WebGL) 7 and the three.js framework, 8 which builds on WebGL, to implement an interactive 3D visualizer directly in the user's web browser without the need to install additional software. This allows the user to select an appropriate perspective that helps him or her with the annotation process by rotating and zooming the virtual camera freely.…”
Section: A User Interface
confidence: 99%
“…Both of the nonmotion data sources could be turned into animations. The motion data sources that we use are behavioral markup language actions from SmartBody, a subset of the CMU Motion Capture Database, and The ICS Action Database. Our action recognition source is The Human Motion Database, and our plain text actions are from The American Time Use Survey.…”
Section: Experimentation
confidence: 99%
“…Another suitable route for pHRI is the imitation learning approach, justified by a wide variety of computational algorithms that have been recently developed [see an exhaustive overview in Schaal et al (2003)]. This approach leads to the teacher-student concept and consists of (i) collecting movements of a human agent (i.e., the teacher in the imitation process), gathered by motion capture techniques and clustered in motion databases (Guerra-Filho and Biswas 2011; Kuehne et al 2011; Mandery et al 2015; Wojtusch and Stryk 2015); (ii) classifying motions as a set of primitive actions (i.e., motion primitives) for model abstraction (Kulic et al 2008); (iii) mapping human models onto a robot platform, i.e., the student in the imitation process. The retargeting implies the development of robot control algorithms capable of learning motion models from these primitives (Amor et al 2014; Terlemez et al 2014; Mandery et al 2016) in order to emulate human-like movements.…”
Section: Introduction
confidence: 99%