2021
DOI: 10.1155/2021/9965781

Motion Direction Inconsistency‐Based Fight Detection for Multiview Surveillance Videos

Abstract: Nowadays, with the increasing number of surveillance cameras, human behavior detection is of importance for public security. Detection of fight behavior in surveillance video is an essential and challenging research field. We propose a multiview fight detection method based on statistical characteristics of the optical flow and a random forest. Cyber-physical systems for monitoring can obtain timely and accurate information from this method. Two novel descriptors named Motion Direction Inconsistency (MoDI) and…
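As a rough illustration of the pipeline the abstract describes, the sketch below computes sparse optical-flow statistics per clip and feeds them to a random forest. It assumes OpenCV and scikit-learn; the circular-variance measure used here as a direction-inconsistency cue is an illustrative stand-in, not the paper's published MoDI definition.

```python
# Hypothetical sketch: optical-flow statistics + random forest for fight detection.
# The direction-inconsistency measure below (circular variance of flow angles) is an
# illustrative stand-in, NOT the MoDI descriptor defined in the paper.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def clip_features(frames):
    """Per-clip feature vector built from sparse optical-flow statistics."""
    mags, angs = [], []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if pts is None or len(pts) == 0:
            # Re-detect key points if tracking lost them.
            pts, prev = cv2.goodFeaturesToTrack(gray, 200, 0.01, 7), gray
            continue
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        good_new = nxt[status.flatten() == 1]
        good_old = pts[status.flatten() == 1]
        flow = (good_new - good_old).reshape(-1, 2)
        mags.append(np.linalg.norm(flow, axis=1))
        angs.append(np.arctan2(flow[:, 1], flow[:, 0]))
        prev, pts = gray, good_new.reshape(-1, 1, 2)
    mags = np.concatenate(mags) if mags else np.zeros(1)
    angs = np.concatenate(angs) if angs else np.zeros(1)
    # Circular variance of flow directions: high when motions point in many directions.
    direction_inconsistency = 1.0 - np.hypot(np.cos(angs).mean(), np.sin(angs).mean())
    return np.array([mags.mean(), mags.std(), mags.max(),
                     direction_inconsistency, angs.std()])

# Usage (X: list of clips, each a list of BGR frames; y: 1 = fight, 0 = non-fight):
# clf = RandomForestClassifier(n_estimators=100).fit([clip_features(c) for c in X], y)
```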

Cited by 10 publications (4 citation statements) · References 27 publications

“…In [5], the authors introduce a multi-view fight detection method for surveillance videos, addressing challenges such as varying shooting views and potential misjudgments. Leveraging optical flow analysis and random forest classification, the system computes novel descriptors and achieves improved accuracy, reduced false alarms, and robustness against different viewpoints on the CASIA dataset.…”
Section: Literature Survey (mentioning)
confidence: 99%
“…Traditional approaches often rely on handcrafted features, rule-based systems, and predefined thresholds to identify violent actions (Bermejo Nievas et al. 2011; Febin et al. 2020; Senst et al. 2017; Yao et al. 2021; Ye et al. 2020; Gao et al. 2016). The approach by Febin et al. (2020) extracts MoBSIFT features, which compute SIFT descriptors over motion information.…”
Section: Related Work (mentioning)
confidence: 99%
“…The accuracy on the HF dataset employed in their experiments was 92%. Yao et al. [17] used a sparse optical flow algorithm to extract motion magnitude and orientation features around key points of interest. Khalil et al. [18] divided the frame into 108 blocks and used a block-matching algorithm to extract one motion vector for each block, representing the displacement of each block of pixels between two consecutive frames.…”
Section: Handcrafted Features (mentioning)
confidence: 99%
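To make the block-matching idea attributed to Khalil et al. [18] concrete, here is a minimal, hypothetical sketch: the frame is split into a grid of blocks and each block's displacement between consecutive frames is found by exhaustive sum-of-absolute-differences search. The 9×12 grid (108 blocks) and the search radius are assumptions, not values taken from the cited paper.

```python
# Hypothetical block-matching sketch: one motion vector per block between two
# consecutive grayscale frames. Grid size and search radius are assumed values.
import numpy as np

def block_motion_vectors(prev_gray, next_gray, grid=(9, 12), radius=8):
    """Return one (dy, dx) motion vector per block (grid[0] x grid[1] blocks)."""
    h, w = prev_gray.shape
    bh, bw = h // grid[0], w // grid[1]
    vectors = np.zeros((grid[0], grid[1], 2), dtype=int)
    for i in range(grid[0]):
        for j in range(grid[1]):
            y0, x0 = i * bh, j * bw
            block = prev_gray[y0:y0 + bh, x0:x0 + bw].astype(np.int32)
            best_sad, best_v = None, (0, 0)
            # Exhaustive search over displacements within the radius.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + bh > h or x1 + bw > w:
                        continue
                    cand = next_gray[y1:y1 + bh, x1:x1 + bw].astype(np.int32)
                    sad = np.abs(block - cand).sum()  # sum of absolute differences
                    if best_sad is None or sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            vectors[i, j] = best_v
    return vectors
```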