2020
DOI: 10.1109/access.2020.3009136
Learning Dynamic Spatio-Temporal Relations for Human Activity Recognition

Abstract: Human activity, which usually consists of several actions (sub-activities), generally covers interactions among persons and/or objects. In particular, human actions involve certain spatial and temporal relationships, are the components of more complicated activity, and evolve dynamically over time. Therefore, the description of a single human action and the modeling of the evolution of successive human actions are two major issues in human activity recognition. In this paper, we develop a method for human acti…

Cited by 8 publications (4 citation statements) | References 47 publications
“…The two methods [14] of human activity recognition as shown in Fig. 2 have drawbacks of their own like the vision-based human activity recognition is easily impacted by external factors including lighting condition, clothing color, image background, and so on.…”
Section: Introduction
confidence: 99%
“…As shown in Tables 3, 5 and 6:

Method                  Accuracy %
[27]                    93.3
*Koppula et al. [24]    80.6
*Tayyub et al. [45]     95.2
Sanou et al. [36]       86.4
GLIDN (ours)            88.54
…”
Section: Are Contextual Views Of Humans and Objects Important?
confidence: 94%
“…Also, our STIT [1]. Note that [52], [53], [1] and [54] have employed additional skeleton or depth information.…”
Section: B Experiments On the Charades Dataset 1) Implementation Details
confidence: 99%
“…Method               Accuracy %
Wang et al. [52]        81.2
Liu et al. [53]         93.3
Koppula et al. [1]      80.6
Tayyub et al. [54]      95.2
Sanou et al. [50]       93.6
STIT (ours)             95.93

Our model can be incorporated with any backbone model rather than I3D without end-to-end training. As a result, our STIT model with SlowFast 16 x 8 surpasses its baseline.…”
Section: Model
confidence: 99%