2010
DOI: 10.1007/978-3-642-15822-3_20

Action Classification in Soccer Videos with Long Short-Term Memory Recurrent Neural Networks

Abstract: In this paper, we propose a novel approach for action classification in soccer videos using a recurrent neural network scheme. From each video action, at each timestep, we extract a set of features describing both the visual content (by means of a BoW approach) and the dominant motion (with a keypoint-based approach). A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each video sequence, considering the temporal evolution of the features for each timest…
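The pipeline the abstract describes — a per-timestep feature vector (a BoW visual histogram concatenated with a motion descriptor) fed through an LSTM whose final hidden state is mapped to class scores — can be sketched as follows. This is a minimal illustration, not the authors' implementation; all dimensions, the random features, and the single-layer architecture are assumptions for the example.

```python
import numpy as np

# Hypothetical dimensions -- the paper does not specify these values.
T, VISUAL_DIM, MOTION_DIM, HIDDEN, N_CLASSES = 20, 32, 8, 16, 4

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_classify(features, params):
    """Run a single-layer LSTM over a (T, D) feature sequence and
    return class scores computed from the final hidden state."""
    Wx, Wh, b, Wout = params
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)           # cell state (constant error carousel)
    for x_t in features:
        z = Wx @ x_t + Wh @ h + b  # joint pre-activation for all gates
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g) # gated update of the cell state
        h = o * np.tanh(c)         # gated output
    return Wout @ h

D = VISUAL_DIM + MOTION_DIM
params = (rng.normal(0, 0.1, (4 * HIDDEN, D)),
          rng.normal(0, 0.1, (4 * HIDDEN, HIDDEN)),
          np.zeros(4 * HIDDEN),
          rng.normal(0, 0.1, (N_CLASSES, HIDDEN)))

# One BoW visual histogram plus one motion descriptor per timestep.
bow = rng.random((T, VISUAL_DIM))
bow /= bow.sum(axis=1, keepdims=True)   # histograms sum to 1
motion = rng.normal(size=(T, MOTION_DIM))
sequence = np.concatenate([bow, motion], axis=1)

scores = lstm_classify(sequence, params)
pred = int(np.argmax(scores))
```

Because the recurrence consumes one feature vector per timestep, the classifier sees the temporal evolution of the descriptors rather than a single pooled representation.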

Cited by 98 publications (66 citation statements)
References 7 publications
“…The second key idea in LSTM is the use of multiplicative gates to control the access to the CEC. We have shown in our previous work [1] that LSTM are efficient to label sequences of descriptors corresponding to hand-crafted features.…”
Section: Sequence Labelling Considering the Temporal Evolution of Lea…
confidence: 99%
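The "multiplicative gates controlling access to the CEC" mentioned above can be made concrete with a one-step toy cell: when the forget gate saturates open and the input gate saturates shut, the cell value flows through the constant error carousel unchanged. This is a generic illustrative sketch, not the cited work's code; the tiny dimensions and bias values are assumptions chosen to saturate the gates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, p):
    """One LSTM step; i/f/o are the multiplicative gates that guard
    writes to and reads from the CEC (cell state c)."""
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h + p["bi"])   # input gate
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h + p["bf"])   # forget gate
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h + p["bo"])   # output gate
    g = np.tanh(p["Wg"] @ x + p["Ug"] @ h + p["bg"])   # candidate
    c_new = f * c + i * g        # gated write into the CEC
    h_new = o * np.tanh(c_new)   # gated read out of the CEC
    return h_new, c_new

D, H = 3, 2
rng = np.random.default_rng(1)
p = {k: rng.normal(0, 0.1, (H, D) if k.startswith("W") else
                   (H, H) if k.startswith("U") else H)
     for k in ["Wi", "Ui", "bi", "Wf", "Uf", "bf",
               "Wo", "Uo", "bo", "Wg", "Ug", "bg"]}

# Saturate the forget gate open (large bf) and the input gate shut
# (very negative bi): the cell value is carried through unchanged.
p["bf"][:] = 50.0
p["bi"][:] = -50.0
c0 = np.array([0.7, -1.3])
h1, c1 = lstm_step(rng.normal(size=D), np.zeros(H), c0, p)
```

This gating is what lets error signals propagate over long sequences without vanishing, which is why LSTM suits sequences of per-timestep descriptors.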
“…Thus, even if the learned features, taken individually, contain temporal information, their evolution over time is completely ignored. However, we have shown in our previous work [1] that such information does help discriminate between actions, and is particularly exploitable by a category of learning machines adapted to sequential data, namely Long Short-Term Memory recurrent neural networks (LSTM) [6].…”
Section: Introduction and Related Work
confidence: 99%
“…Hard attention [Mnih et al., 2014; Ba et al., 2014] samples an attention location at each time step, which makes the system non-differentiable. In contrast, soft attention [Bahdanau et al., 2014; Sharma et al., 2015] learns a set of weights corresponding to each region; the model is differentiable and can be trained end-to-end using standard backpropagation. Therefore, we adopt the soft attention model in our work.…”
Section: Relative Attention Network
confidence: 99%
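The soft-attention mechanism described in the statement above — a softmax over per-region scores producing weights that sum to 1, followed by a weighted sum of region features — can be sketched generically as below. The region count, feature dimension, and dot-product scoring are assumptions for illustration, not details taken from the cited paper.

```python
import numpy as np

def soft_attention(regions, query):
    """Soft attention: softmax over per-region alignment scores yields
    positive weights summing to 1; the output is the weighted sum of
    region features. Every step is smooth, so gradients can flow
    end-to-end through the weights."""
    scores = regions @ query                    # (R,) alignment scores
    scores = scores - scores.max()              # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    context = weights @ regions                 # (D,) attended feature
    return weights, context

rng = np.random.default_rng(2)
R, D = 6, 4                   # hypothetical: 6 regions, 4-d features
regions = rng.normal(size=(R, D))
query = rng.normal(size=D)
weights, context = soft_attention(regions, query)
```

Hard attention instead draws a single region index from such a distribution, which is why it breaks differentiability and typically requires reinforcement-style training.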
“…We test our network on top of both RGB frames and optical flow. The optical flow is computed using the approach of [Brox et al., 2004]. As the point action in the UTI dataset is a single-person action, we duplicate the image as input for the networks that require both subjects, i.e., the naive fusion network, the coupled network and the tri-coupled network.…”
Section: Implementation Details
confidence: 99%
“…In [4], a classification accuracy of 52.75% was obtained with a k-NN classifier, and 73.25% when using a support vector machine (SVM-based) approach. In [5], similar results were obtained; however, RNNs were used as the classifier, which improved the classification accuracy to 74%.…”
unclassified