2022
DOI: 10.3390/e24111663

Video Action Recognition Using Motion and Multi-View Excitation with Temporal Aggregation

Abstract: Spatiotemporal and motion feature representations are the key to video action recognition. Typical previous approaches utilize 3D CNNs to cope with both spatial and temporal features, but they incur a huge computational cost. Other approaches utilize (2+1)D CNNs to learn spatial and temporal features efficiently, but they neglect the importance of motion representations. To overcome the problems with previous approaches, we propose a novel block which makes it possible to alleviate the aforementioned…
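The abstract's contrast between full 3D convolutions and the factorized (2+1)D alternative, plus the motion cue the paper adds on top, can be illustrated with a short PyTorch sketch. This is a hedged illustration only: `Conv2Plus1D` shows the standard spatial-then-temporal factorization, and `MotionExcitation` is a hypothetical block based on the general motion-excitation idea (adjacent-frame feature differences gating channels, as in TEA/STM-style modules); neither is the paper's exact architecture, whose details are cut off in the truncated abstract.

```python
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    """Factorized (2+1)D convolution: a 2D spatial conv followed by a 1D
    temporal conv, a common lightweight substitute for a full 3D conv."""
    def __init__(self, in_ch, out_ch, mid_ch=None):
        super().__init__()
        mid_ch = mid_ch or out_ch
        # Spatial: 1 x 3 x 3 kernel over (T, H, W)
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1), bias=False)
        # Temporal: 3 x 1 x 1 kernel
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0), bias=False)

    def forward(self, x):          # x: (N, C, T, H, W)
        return self.temporal(self.spatial(x))

class MotionExcitation(nn.Module):
    """Hypothetical motion-excitation block (not the paper's exact design):
    differences between adjacent frame features gate the channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        r = max(channels // reduction, 1)
        self.squeeze = nn.Conv3d(channels, r, kernel_size=1, bias=False)
        self.expand = nn.Conv3d(r, channels, kernel_size=1, bias=False)
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))  # keep T, pool H and W

    def forward(self, x):          # x: (N, C, T, H, W)
        f = self.squeeze(x)
        # Consecutive-frame feature differences approximate motion
        diff = f[:, :, 1:] - f[:, :, :-1]
        diff = torch.cat([diff, torch.zeros_like(f[:, :, :1])], dim=2)
        attn = torch.sigmoid(self.expand(self.pool(diff)))
        return x + x * attn        # residual channel excitation

x = torch.randn(2, 64, 8, 56, 56)                 # (batch, C, frames, H, W)
y = MotionExcitation(64)(Conv2Plus1D(64, 64)(x))
print(y.shape)                                     # torch.Size([2, 64, 8, 56, 56])
```

The factorization replaces one 3 x 3 x 3 kernel with a 1 x 3 x 3 plus a 3 x 1 x 1 kernel, which is the source of the efficiency gain the abstract alludes to; the excitation block then re-injects the motion signal that plain (2+1)D features lack.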

Cited by 3 publications (1 citation statement) · References 56 publications
“…The approach simultaneously enhances the adaptive weight by using the concept of graph cutting. In recent years, deep-learning-based human action recognition [10, 11, 12, 13] has received increasing attention in the field of computer vision due to its efficiency in understanding context, based on an imitation of our visual cortex. There are 2D CNN-based methods that use a two-stream approach and LSTM networks, and there are 3D CNN-based methods for HAR.…”
Section: Introduction (mentioning)
Confidence: 99%
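The citing passage mentions two-stream 2D CNN methods for HAR; a minimal sketch of that family may help make it concrete. This is an assumed, generic late-fusion design (the stream backbones, `flow_len`, and score averaging are illustrative choices, not taken from any cited paper): one 2D CNN processes an RGB frame, another processes stacked optical-flow fields, and their class scores are averaged.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Hypothetical two-stream HAR classifier: an RGB stream and a stacked
    optical-flow stream, fused by averaging class scores (late fusion)."""
    def __init__(self, num_classes, flow_len=10):
        super().__init__()
        self.rgb_stream = self._make_stream(3, num_classes)
        # Stacked flow: flow_len frames x 2 displacement channels (x, y)
        self.flow_stream = self._make_stream(2 * flow_len, num_classes)

    @staticmethod
    def _make_stream(in_ch, num_classes):
        # Deliberately tiny backbone; real systems use a full 2D CNN here
        return nn.Sequential(
            nn.Conv2d(in_ch, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, rgb, flow):
        # Late fusion: average the per-stream class scores
        return (self.rgb_stream(rgb) + self.flow_stream(flow)) / 2

logits = TwoStreamFusion(num_classes=51)(
    torch.randn(4, 3, 224, 224),    # one RGB frame per clip
    torch.randn(4, 20, 224, 224),   # 10 stacked flow fields (x, y)
)
print(logits.shape)                  # torch.Size([4, 51])
```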