2021
DOI: 10.4018/ijcini.287601

Violence Detection With Two-Stream Neural Network Based on C3D

Abstract: In recent years, violence detection has gradually become an important research area in computer vision, and many models with high accuracy have been proposed. However, these methods generalize poorly across different datasets. In this paper, the authors propose a violence detection method based on a C3D two-stream network for spatiotemporal features. First, the authors preprocess the video data of the RGB stream and the optical-flow stream separately. Second, the authors feed the data into two …
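The abstract outlines a two-stream design: RGB clips and optical-flow clips are preprocessed separately and fed into two 3D-convolutional (C3D-style) branches whose features are then combined for classification. Below is a minimal sketch of that idea, assuming PyTorch; the layer sizes, late-fusion strategy, class count, and input resolution are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn


class C3DStream(nn.Module):
    """A small 3D-convolutional feature extractor for one input stream."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool only spatially at first
            nn.Conv3d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.AdaptiveAvgPool3d(1),               # collapse to a 128-d clip descriptor
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)


class TwoStreamC3D(nn.Module):
    """Two-stream network: RGB frames + optical-flow fields, fused late."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.rgb_stream = C3DStream(in_channels=3)   # RGB clips
        self.flow_stream = C3DStream(in_channels=2)  # (dx, dy) flow clips
        self.classifier = nn.Linear(128 * 2, num_classes)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_stream(rgb), self.flow_stream(flow)], dim=1)
        return self.classifier(fused)


# Example: a batch of 4 clips, 16 frames each, at 112x112 resolution.
model = TwoStreamC3D()
rgb_clip = torch.randn(4, 3, 16, 112, 112)
flow_clip = torch.randn(4, 2, 16, 112, 112)
logits = model(rgb_clip, flow_clip)   # shape (4, 2): violent vs. non-violent
```

Concatenating the two stream descriptors before a single linear classifier is one common fusion choice; the paper may fuse scores or intermediate features differently.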

Cited by 2 publications (1 citation statement) | References: 33 publications
“…Because the research process encounters many different factors, such as differences in action performance, environment, and time [3]. To address these differences, several effective feature representation methods have been proposed, among which the most popular and advanced are deep learning representations [4]. In motion recognition especially, different motion types show great differences in appearance and motion model.…”
Section: Introduction
confidence: 99%