2020
DOI: 10.1007/978-3-030-58517-4_21
MotionSqueeze: Neural Motion Feature Learning for Video Understanding

Cited by 118 publications (90 citation statements)
References 42 publications
“…In the formula, N represents the number of points on the outline. The characteristic description of the body behavior is then normalized [24]. The normalization calculation formula is…”
Section: Body Behavior Characteristic Model Of Sports Training
confidence: 99%
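The snippet elides the normalization formula itself. A common choice for contour-based shape descriptors over N outline points is to translate to the centroid and rescale to unit size; the following is a minimal numpy sketch under that assumption (the actual formula in [24] may differ):

```python
import numpy as np

def normalize_contour(points):
    """Translate a contour of N outline points to its centroid and scale it
    to unit RMS radius, making the descriptor invariant to position and size.
    This is an illustrative normalization, not the specific formula of [24]."""
    pts = np.asarray(points, dtype=float)   # shape (N, 2)
    centroid = pts.mean(axis=0)             # remove translation
    centered = pts - centroid
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())  # RMS distance to centroid
    return centered / scale                 # remove scale

# Example: a square contour of N = 4 points
contour = [(0, 0), (2, 0), (2, 2), (0, 2)]
norm = normalize_contour(contour)
```

After normalization the points have zero mean and unit RMS radius, so two body outlines differing only in image position or subject size map to the same description.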
“…Code is available at (accessed on 10 December 2021). Alternatively, the trainable neural module (MotionSqueeze) for effective motion feature extraction proposed in [82] could be exploited. Inserted in the middle of any neural network, it learns to establish correspondences across frames and converts them into motion features, which are readily fed to the next downstream layer for better prediction.…”
Section: Recent Advances In Human Motion Analysis
confidence: 99%
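The correspondence-to-motion idea described above can be illustrated with a toy cost-volume computation: correlate each position in one frame's feature map against a local window in the next frame, then read off the best-matching displacement. This is a hand-written numpy sketch of that principle only, not the actual MotionSqueeze module (which uses learnable layers and a soft, differentiable displacement estimate):

```python
import numpy as np

def local_correlation(feat_t, feat_t1, max_disp=3):
    """Correlate feature maps of two consecutive frames over a local
    (2*max_disp+1)^2 window and return the per-position displacement
    with the highest correlation -- a dense, flow-like motion feature."""
    C, H, W = feat_t.shape
    d = max_disp
    padded = np.pad(feat_t1, ((0, 0), (d, d), (d, d)))  # zero-pad spatially
    disp = np.zeros((2, H, W))
    best = np.full((H, W), -np.inf)
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            # feat_t1 shifted by the candidate displacement (dy, dx)
            shifted = padded[:, d + dy : d + dy + H, d + dx : d + dx + W]
            corr = (feat_t * shifted).sum(axis=0)  # channel-wise dot product
            mask = corr > best                     # keep the best match so far
            best = np.where(mask, corr, best)
            disp[0] = np.where(mask, dy, disp[0])
            disp[1] = np.where(mask, dx, disp[1])
    return disp  # (2, H, W): per-position (dy, dx) motion estimate

# Toy check: the second frame is the first shifted by (dy=1, dx=2)
rng = np.random.default_rng(0)
f0 = rng.standard_normal((32, 16, 16))
f1 = np.roll(f0, shift=(1, 2), axis=(1, 2))
motion = local_correlation(f0, f1)
```

In the interior of the toy feature maps the recovered displacement is exactly (1, 2), the shift applied to the second frame. A learnable module replaces the hard argmax with a soft-argmax so gradients flow through to the downstream layers.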
“…Early approaches usually rely on hand-crafted features, which detect spatio-temporal interest points and then describe these points with local representations [45,46]. With the tremendous success of deep convolutional networks on image classification tasks [12,35,38,41], researchers started to explore the application of deep networks to video action recognition [7,18,29,30,54]. In [37], the famous two-stream architecture is devised by applying two 2D CNN architectures separately to visual frames and stacked optical flows.…”
Section: Related Work
confidence: 99%