2013 IEEE Workshop on Applications of Computer Vision (WACV) 2013
DOI: 10.1109/wacv.2013.6474999
Berkeley MHAD: A comprehensive Multimodal Human Action Database

Cited by 411 publications (275 citation statements); references 25 publications.
“…Human Skeletal Configurations Figure 2(a) provides an example of the human skeleton structure from the Berkeley MHAD [20]. Following [11], we divide the human skeleton into several topological parts such as chains, single X/Y junctions, and double X/Y junctions.…”
Section: Hierarchical Medial-axis Template Models
confidence: 99%
“…Raw inertial sensor data are used extensively, due to their ability to capture instantaneous features of local character and, thus, lead to a rich source of information for action classification. Statistical [23], expressivity [5] and frequency domain parameters [17], on the other hand, although local, convey a summary of an action for different parts of the human body and, thus, they can be time independent. Such parameters usually depend on efficient tracking in video sequences, which is a challenging area of research on its own, attracting the attention of numerous researchers.…”
Section: Related Work
confidence: 99%
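The excerpt above contrasts raw inertial samples with statistical and frequency-domain summary parameters. As a minimal sketch of what such parameters might look like for a single accelerometer axis (the window length, sampling rate, and feature set here are illustrative assumptions, not those of the cited works):

```python
import numpy as np

def inertial_features(signal, fs=30.0):
    """Summarize one accelerometer axis with statistical and
    frequency-domain parameters (illustrative feature set)."""
    # Statistical parameters: a time-independent summary of the window
    feats = {
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "min": float(np.min(signal)),
        "max": float(np.max(signal)),
    }
    # Frequency-domain parameters from the magnitude spectrum
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    feats["dominant_freq"] = float(freqs[np.argmax(spectrum)])
    feats["spectral_energy"] = float(np.sum(spectrum ** 2) / len(signal))
    return feats

# Example: a pure 2 Hz oscillation sampled at 30 Hz for 2 seconds
t = np.arange(0, 2.0, 1.0 / 30.0)
feats = inertial_features(np.sin(2 * np.pi * 2.0 * t), fs=30.0)
```

Because these features compress a whole window into a fixed-length vector, they can be compared across actions of different durations, which is the time-independence the excerpt refers to.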
“…Experimental results are also presented on the recently published dataset, Berkeley MHAD (Multimodal Human Action Database), described in [23]. The dataset comprises 11 actions performed by 12 subjects, with each subject performing a set of 5 repetitions of each action.…”
Section: Berkeley MHAD Database
confidence: 99%
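The recording structure described in the excerpt (12 subjects, 11 actions, 5 repetitions each) can be enumerated directly. The trial-naming scheme below is an illustrative assumption, not the dataset's actual file naming:

```python
from itertools import product

# Nominal Berkeley MHAD recording structure per the excerpt above:
# 12 subjects x 11 actions x 5 repetitions per subject-action pair.
subjects = range(1, 13)    # s01..s12
actions = range(1, 12)     # a01..a11
repetitions = range(1, 6)  # r01..r05

# Hypothetical trial identifiers, one per recorded sequence
trials = [f"s{s:02d}_a{a:02d}_r{r:02d}"
          for s, a, r in product(subjects, actions, repetitions)]

total = len(trials)  # 12 * 11 * 5 = 660 nominal sequences
```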