2021
DOI: 10.1007/s44163-021-00004-2
Exploring Convolutional Recurrent architectures for anomaly detection in videos: a comparative study

Abstract: Convolutional Recurrent architectures are currently preferred over 3D convolutional networks, which carry a heavy computational burden, for spatio-temporal learning tasks in videos, and it is imperative to understand how different architectural configurations behave. Yet most current work on visual learning, especially video anomaly detection, predominantly employs ConvLSTM networks and pays less attention to other possible Convolutional Recurrent configurations for temporal learning, which …

Cited by 4 publications (5 citation statements)
References 32 publications
“…These methods have demonstrated their effectiveness in capturing temporal patterns and detecting video anomalies. Some notable examples include Long Short-Term Memory (LSTM) [31,32], Gated Recurrent Unit (GRU) [33,34], and Convolutional Recurrent Neural Network (CRNN) [35].…”
Section: B. Recurrent Neural Networks (RNNs)
confidence: 99%
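The distinction these citing works draw on, and the core of the surveyed paper, is that a ConvLSTM replaces the dense gate transformations of a standard LSTM with convolutions, so the hidden and cell states keep the spatial layout of the input frames. A minimal NumPy sketch of a single ConvLSTM step, using 1×1 convolutions for brevity (all shapes and names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1x1(x, w):
    # x: (H, W, C_in), w: (C_in, C_out) -> (H, W, C_out)
    # A 1x1 convolution is a per-pixel channel mixing.
    return x @ w

def convlstm_step(x, h, c, params):
    """One ConvLSTM step with 1x1 convolutional gates.

    x: input frame features (H, W, C_in)
    h, c: hidden and cell state (H, W, C_hid)
    Gates are computed per spatial location, so h and c
    retain the spatial layout of the input (unlike a dense LSTM).
    """
    Wx, Wh, b = params  # (C_in, 4*C_hid), (C_hid, 4*C_hid), (4*C_hid,)
    z = conv1x1(x, Wx) + conv1x1(h, Wh) + b
    i, f, o, g = np.split(z, 4, axis=-1)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

H, W, C_in, C_hid = 8, 8, 3, 4
params = (rng.normal(size=(C_in, 4 * C_hid)) * 0.1,
          rng.normal(size=(C_hid, 4 * C_hid)) * 0.1,
          np.zeros(4 * C_hid))

h = np.zeros((H, W, C_hid))
c = np.zeros((H, W, C_hid))
for t in range(5):  # unroll over 5 frames
    frame = rng.normal(size=(H, W, C_in))
    h, c = convlstm_step(frame, h, c, params)

print(h.shape)  # hidden state keeps spatial dims: (8, 8, 4)
```

A GRU variant would follow the same pattern with two gates instead of three; the spatial hidden state is what makes these architectures suitable for localizing anomalies in video.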
“…A Time Distributed Layer (TDL) [23], shown as the large rectangle in Figure 1, was selected to compare a set of M sequential images and learn, in this way, the effect of transient changes (e.g. dynamic objects) to the predicted output.…”
Section: B. Deep Convolutional Neural Network
confidence: 99%
“…The LSTM layer is introduced to learn the temporal dependence among observations, e.g. image frames in chronological order [23]. A flattening or pooling operation after the TDL is introduced since the requirement is to have only one dimension per output from the Time Distributed wrapping to insert them to the LSTM layer.…”
Section: B. Deep Convolutional Neural Network
confidence: 99%
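The pipeline the quoted passage describes (a Time Distributed feature extractor applied to each of M sequential frames, a flattening step, then an LSTM over the per-frame vectors) can be sketched in NumPy; the layer shapes and a linear stand-in for the CNN are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_distributed(fn, seq):
    """Apply the same per-frame function to every frame in the sequence."""
    return np.stack([fn(frame) for frame in seq])

def feature_extractor(frame, w):
    # Stand-in for a small CNN: a shared linear map per frame,
    # producing a flat D-dimensional feature vector.
    return np.tanh(frame.reshape(-1) @ w)

def lstm(seq, Wx, Wh, b):
    """Plain (dense) LSTM over a sequence of 1-D feature vectors."""
    hid = Wh.shape[0]
    h = np.zeros(hid)
    c = np.zeros(hid)
    for x in seq:
        z = x @ Wx + h @ Wh + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h  # final hidden state summarises the M frames

M, H, W, D, hid = 4, 6, 6, 16, 8  # M sequential frames, as in the quote
frames = rng.normal(size=(M, H, W))
w_feat = rng.normal(size=(H * W, D)) * 0.1

# Time Distributed wrapping: one flat feature vector per frame ...
feats = time_distributed(lambda f: feature_extractor(f, w_feat), frames)
assert feats.shape == (M, D)  # one dimension per output, ready for the LSTM

# ... then the LSTM learns the temporal dependence among frames.
Wx = rng.normal(size=(D, 4 * hid)) * 0.1
Wh = rng.normal(size=(hid, 4 * hid)) * 0.1
summary = lstm(feats, Wx, Wh, np.zeros(4 * hid))
print(summary.shape)  # (8,)
```

The flattening requirement mentioned in the quote appears here as the `reshape(-1)` inside the per-frame extractor: the LSTM consumes one-dimensional vectors, so spatial structure must be collapsed (or pooled) before the recurrent layer.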
“…In the next article, titled "Exploring Convolutional Recurrent Architectures for Anomaly Detection in Videos: a comparative study," the authors Ambareesh Ravi and Fakhri Karray explore a variety of Convolutional Recurrent architectures and the influence of hyper-parameters on their performance for the task of anomaly detection [11].…”
Section: The Contents of the Inaugural Issue
confidence: 99%