2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)
DOI: 10.1109/fg.2017.67

The DAily Home LIfe Activity Dataset: A High Semantic Activity Dataset for Online Recognition

Cited by 22 publications (19 citation statements). References 39 publications.
“…To evaluate our proposed pipeline, we tested it with two different datasets; DAHLIA [9] and GAADRD [10]. The details and results of these datasets are explained in next subsections.…”
Section: Results (mentioning, confidence: 99%)
“…Recently, the DAily Home LIfe Activity Dataset (DAHLIA) was published [9]; it is the largest public dataset for the detection of daily-living activities. Several algorithms have been applied to this dataset as baselines. Online Efficient Linear Search (ELS) [19] uses a sliding-window approach over per-frame 3D skeleton features, forming a descriptor called "gesturelets"; these are quantized into a codebook, which is then used to train an SVM classifier.…”
Section: Related Work (mentioning, confidence: 99%)
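The ELS-style pipeline quoted above (per-frame skeleton features, a learned codebook, sliding-window histograms, an SVM) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the feature dimensionality, window size, codebook size, and the random stand-in data are all assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_frames, feat_dim = 600, 45              # e.g. 15 joints x 3D coordinates (assumed)
frames = rng.normal(size=(n_frames, feat_dim))   # stand-in per-frame skeleton features
labels = rng.integers(0, 3, size=n_frames)       # stand-in per-frame activity labels

# 1. Quantize per-frame features ("gesturelets") into a codebook via k-means.
n_words = 16
codebook = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(frames)

# 2. Slide a window over the sequence; each window becomes a normalized
#    histogram of codeword occurrences (a bag-of-words descriptor).
win, step = 30, 10
X, y = [], []
for start in range(0, n_frames - win + 1, step):
    words = codebook.predict(frames[start:start + win])
    hist = np.bincount(words, minlength=n_words).astype(float)
    X.append(hist / hist.sum())
    # label the window by its majority frame label (a simplification)
    y.append(np.bincount(labels[start:start + win]).argmax())

# 3. Train a linear SVM on the window descriptors.
clf = LinearSVC(dual=False).fit(X, y)
pred = clf.predict(X)        # one activity label per window
```

For online recognition, the same window histogram would be recomputed as each new frame arrives and fed to the trained classifier.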
“…UCF101 [36], HMDB [22] and Kinetics [19] were widely used for recognizing actions in video clips [40,29,45,8,35,44,7,28,26,38,18,41]; THUMOS [17], ActivityNet [4] and AVA [13] were introduced for temporal/spatial-temporal action localization [33,48,27,37,52,53,3,5,24]. Recently, significant attention has been drawn to model human-human [13] and human-object interactions in daily actions [31,34,42]. In contrast to these datasets that were designed to evaluate motion and appearance modeling, or human-object interactions, our Agent-in-Place Action (APA) dataset is the first one that focuses on actions that are defined with respect to scene layouts, including interaction with places and moving directions.…”
Section: Related Work (mentioning, confidence: 99%)