2022
DOI: 10.1007/978-3-031-19833-5_15
COMPOSER: Compositional Reasoning of Group Activity in Videos with Keypoint-Only Modality

Cited by 20 publications (11 citation statements)
References 39 publications
“…In this research, a novel approach to group activity analysis is introduced, leveraging a fusion of the Yolov5 [12] and ViT [13] models to architect networks dedicated to individual and group activity analyses. The experimental outcomes showcase the efficacy of this innovative approach in real-world systems, offering a straightforward yet highly efficient solution within the realm of group activity analysis. Moving forward, future research endeavors will concentrate on enhancing the model's recognition accuracy while concurrently streamlining its architecture for lighter computational load. This endeavor aims to enable real-time deployment in practical scenarios, extending the applicability of this technology across diverse domains. Such advancements promise a broader spectrum of practical applications, fostering the widespread adoption of group activity analysis technology across multifarious industries.…”
Section: Discussion
confidence: 99%
“…Furthermore, some studies have amalgamated skeletal information with graph or semantic models for analyzing group behavior, as observed in works by Zhang et al. [11] and Zhou et al. [12]. Overall, as the field of group activity analysis evolves, researchers have explored an array of methodologies and model structures to tackle diverse group activity recognition challenges. In the course of continuous development, only a handful of practical techniques have emerged, largely due to the complexity of proposed analytical methods that fail to align with practical application demands.…”
Section: Introduction
confidence: 92%
“…Recent deep learning-based GAR methods [8, 10–12, 16–20] recognize the activity of a group in three steps. First, they encode the features of the states of each individual action in each frame in a video of a specific activity to obtain a feature map [21].…”
Section: Group Activity Recognition
confidence: 99%
“…In the test set, we denote the number of samples in the k-th class as p_k, k = 1, 2, ..., K, where K is the number of classes, and the number of correctly recognized samples in the k-th class is q_k. The calculations of MCA and MPCA are formulated as Equations (18) and (19):…”
Section: Experiments Settings, 4.1.1 Datasets
confidence: 99%
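The equations referenced in the quote above did not survive extraction, but the per-class counts p_k and q_k suggest the usual accuracy metrics. A minimal sketch, assuming the standard definitions (MCA as overall multi-class accuracy, MPCA as the mean of per-class accuracies); the function name `mca_mpca` is hypothetical, not from the cited paper:

```python
def mca_mpca(p, q):
    """Compute MCA and MPCA from per-class counts.

    p[k]: number of test samples in class k (p_k)
    q[k]: number of correctly recognized samples in class k (q_k)
    """
    assert len(p) == len(q) and all(pk > 0 for pk in p)
    # MCA: total correct over total samples (overall multi-class accuracy)
    mca = sum(q) / sum(p)
    # MPCA: average of the K per-class accuracies q_k / p_k
    mpca = sum(qk / pk for pk, qk in zip(p, q)) / len(p)
    return mca, mpca
```

MPCA weights every class equally regardless of its sample count, which is why both metrics are commonly reported on class-imbalanced activity datasets.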
“…These include spatially detailed information, such as person poses and object locations, and object categorical information. Such information can be readily and reliably extracted by modern deep learning algorithms and has been reported to enhance the accuracy of action recognition [6]. For example, [7] exploited the positional relations between instances and object categories, and achieved accurate scene-level object-centered action recognition.…”
Section: Introduction
confidence: 99%