2018
DOI: 10.1007/s10462-018-9651-1
Video benchmarks of human action datasets: a review

Cited by 66 publications (36 citation statements)
References 161 publications
“…Surveys on dataset benchmarks for human action recognition from visual data constitute another field of research tackled in [28,44,61,66,98,181]. They aim to guide researchers in the selection of the most suitable dataset for benchmarking their algorithms.…”
Section: Related Surveys (mentioning)
Confidence: 99%
“…Authors in [44] propose a novel dataset, called CONVERSE, that represents complex conversational interactions between two individuals via 3D pose. Similarly, authors in [20,46,180,181,232] present a set of comprehensive reviews of the most commonly used RGB-D video-based activity recognition datasets. Relevant information in each category is extracted to help researchers easily choose appropriate data for their needs.…”
Section: Related Surveys (mentioning)
Confidence: 99%
“…A summary of all the above datasets can be found in Table 6. For a more comprehensive review on human action recognition datasets, the reader is referred to [256].…”
Section: Video Understanding (mentioning)
Confidence: 99%
“…YouTube-8M is a dataset of around 8 million videos gathered via the video platform YouTube [1]. The average duration of these videos is 3.75 min, ranging from 2 to 8.33 min [26], and each video is assigned metadata according to a predefined vocabulary. As other sources describe this metadata as noisy [8], its quality may be considered controversial.…”
Section: Related Work (mentioning)
Confidence: 99%