2021
DOI: 10.1109/tnnls.2020.2978942

Host–Parasite: Graph LSTM-in-LSTM for Group Activity Recognition

Cited by 180 publications (43 citation statements)
References 42 publications

“…Wu et al. [75] proposed global motion patterns to represent complex multi-person motion in sports videos. Global motion patterns extracted by an optical-flow algorithm are fed into convolutional neural networks and LSTM networks to extract spatial and temporal features for event classification.…”
Section: Hierarchical Temporal Modeling (mentioning)
confidence: 99%
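
The optical-flow + CNN + LSTM pipeline summarized in the statement above can be illustrated with a short PyTorch sketch. Everything below (the two-channel flow input, the backbone, layer sizes, and pooling) is a hypothetical minimal configuration chosen for illustration; it is not the architecture of Wu et al. [75].

```python
# Minimal sketch of a CNN + LSTM pipeline over optical-flow motion maps.
# All shapes and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class MotionPatternClassifier(nn.Module):
    def __init__(self, num_events: int, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        # Small CNN over per-frame motion maps (2 channels: flow-x, flow-y).
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # LSTM aggregates the per-frame spatial features over time.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_events)

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        # flow: (batch, time, 2, H, W) optical-flow fields.
        b, t, c, h, w = flow.shape
        feats = self.cnn(flow.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.classifier(h_n[-1])  # event logits

if __name__ == "__main__":
    model = MotionPatternClassifier(num_events=8)
    logits = model(torch.randn(4, 16, 2, 64, 64))  # 4 clips, 16 flow frames each
    print(logits.shape)  # torch.Size([4, 8])
```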
“…Hajiaghayi and Vahedi [87] utilized LSTM models for prediction and pattern extraction of code failures. In addition, Shu et al. [82] proposed a multi-LSTM, or LSTM-in-LSTM, mechanism for group activity recognition. Denoted GLIL (Graph LSTM-in-LSTM), it uses LSTMs for person-level action recognition that reside in a global graph LSTM for group-level recognition.…”
Section: Machine Learning On Sequential Data (mentioning)
confidence: 99%
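
To make the LSTM-in-LSTM nesting described above concrete, here is a minimal PyTorch sketch in which a shared person-level LSTM cell feeds a group-level LSTM cell through a pooling step at every time step. The shared cell, the pooling choice, and all dimensions are illustrative assumptions; the sketch omits the graph structure of the published GLIL.

```python
# Minimal LSTM-in-LSTM sketch: per-person LSTM states are pooled into a
# group-level LSTM each time step. Illustrative only; not the GLIL design.
import torch
import torch.nn as nn

class LSTMInLSTM(nn.Module):
    def __init__(self, in_dim: int, person_hidden: int = 64,
                 group_hidden: int = 128, num_activities: int = 8):
        super().__init__()
        self.person_cell = nn.LSTMCell(in_dim, person_hidden)  # shared across persons
        self.group_cell = nn.LSTMCell(person_hidden, group_hidden)
        self.classifier = nn.Linear(group_hidden, num_activities)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, persons, in_dim) per-person features.
        b, t, n, d = x.shape
        hp = x.new_zeros(b * n, self.person_cell.hidden_size)
        cp = torch.zeros_like(hp)
        hg = x.new_zeros(b, self.group_cell.hidden_size)
        cg = torch.zeros_like(hg)
        for step in range(t):
            # Person-level update (all persons share one LSTM cell).
            hp, cp = self.person_cell(x[:, step].reshape(b * n, d), (hp, cp))
            # Pool person states and update the group-level LSTM.
            pooled = hp.reshape(b, n, -1).max(dim=1).values
            hg, cg = self.group_cell(pooled, (hg, cg))
        return self.classifier(hg)  # group-activity logits

if __name__ == "__main__":
    model = LSTMInLSTM(in_dim=32)
    print(model(torch.randn(2, 10, 6, 32)).shape)  # torch.Size([2, 8])
```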
“…Shu et al. [15] proposed a novel Hierarchical Long Short-Term Concurrent Memory (H-LSTCM) to learn the dynamic inter-related representations among a group of persons for hierarchically recognizing human interactions. Shu et al. [16] proposed a novel graph LSTM-in-LSTM (GLIL) framework to address group activity recognition by modeling person-level actions and the group-level activity simultaneously. Tang et al. [17] proposed a novel Coherence Constrained Graph LSTM (CCG-LSTM) for group activity recognition, exploring the motion-level characteristics of group activity with several coherence constraints.…”
Section: Related Work (mentioning)
confidence: 99%
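
The three methods listed above share a common ingredient: an LSTM whose per-person state update also aggregates information from other persons over a graph. The sketch below shows only that generic graph-LSTM step; the adjacency construction, the aggregation, and the coherence constraints are illustrative assumptions, not the specific H-LSTCM, GLIL, or CCG-LSTM formulations.

```python
# Generic graph-LSTM step: each person's LSTM state is updated from its own
# features plus neighbour hidden states aggregated over an adjacency matrix.
# Illustrative sketch under assumed shapes; not a published formulation.
import torch
import torch.nn as nn

class GraphLSTMStep(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        # Cell input: person features concatenated with neighbour context.
        self.cell = nn.LSTMCell(in_dim + hidden, hidden)

    def forward(self, x, adj, state):
        # x: (persons, in_dim) features at one time step
        # adj: (persons, persons) row-normalized adjacency (who attends to whom)
        # state: tuple (h, c), each (persons, hidden)
        h, c = state
        neighbour_ctx = adj @ h  # aggregate neighbour hidden states
        return self.cell(torch.cat([x, neighbour_ctx], dim=-1), (h, c))

if __name__ == "__main__":
    n, d, hid = 6, 32, 64
    step = GraphLSTMStep(d, hid)
    adj = torch.softmax(torch.randn(n, n), dim=-1)  # toy row-normalized graph
    h, c = torch.zeros(n, hid), torch.zeros(n, hid)
    for t in range(10):  # unroll over a short clip
        h, c = step(torch.randn(n, d), adj, (h, c))
    print(h.shape)  # torch.Size([6, 64])
```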