With the adoption of video surveillance for object detection in many areas, monitoring abnormal behavior across several cameras demands constant attention from a single camera operator, which is a tedious task. In multiview camera setups, accurately detecting different types of guns and knives and distinguishing them from other surveillance objects in real time is difficult, and most detection cameras are resource-constrained devices with limited computational capacity. To mitigate this problem, we propose a lightweight subclass detection method based on a convolutional neural network that classifies, locates, and detects different types of guns and knives effectively and efficiently in a real-time environment on resource-constrained devices. The detection classifier is a multiclass subclass-detection convolutional neural network that classifies object frames into subclasses such as abnormal and normal. The best state-of-the-art frameworks achieve a mean average precision of 84.21% for handgun detection and 90.20% for knife detection on a single camera view. After extensive experiments, the proposed method achieved a best precision of 97.50% for detecting different types of guns and knives on the ImageNet and IMFDB datasets, 90.50% on the Open Images dataset, 93% on the Olmos dataset, and 90.7% on multiview cameras. On a resource-constrained device, the method still delivered a satisfactory precision of 85.5% for detection on multiview cameras.
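The subclass step described above (mapping detections into abnormal vs. normal groups) can be illustrated with a toy sketch. This is not the paper's network; the label set and dictionary layout are assumptions chosen purely for illustration.

```python
# Hypothetical two-level ("subclass") labelling: a detector first assigns a
# coarse object class, which is then mapped to the abnormal/normal subclass
# used for raising alerts. The label set below is assumed, not the paper's.
ABNORMAL_CLASSES = {"handgun", "revolver", "rifle", "knife"}

def to_subclass(detections):
    """Tag each detection dict with an 'abnormal' or 'normal' subclass."""
    return [
        {**d, "subclass": "abnormal" if d["label"] in ABNORMAL_CLASSES else "normal"}
        for d in detections
    ]

# Example: two detections from one frame.
frames = [{"label": "handgun", "score": 0.91}, {"label": "backpack", "score": 0.88}]
tagged = to_subclass(frames)
```

In a deployed system the coarse labels would come from the CNN detector itself; the point here is only that the subclass decision is a cheap post-processing step, which suits resource-constrained devices.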
The commercialization and advancement of unmanned aerial vehicles (UAVs) for surveillance have increased over the past decades. UAVs typically carry gimbal cameras and LIDAR sensors for monitoring, yet they are resource-constrained devices with limited storage, battery power, and computing capacity; the UAV's surveillance camera and LIDAR data must therefore be analyzed, extracted, and stored efficiently. Video synopsis is an efficient methodology that shifts foreground objects in the time and space domains, creating a condensed video for analysis and storage. However, traditional video synopsis methodologies cannot produce an abnormal-behavior synopsis (e.g., a synopsis containing only the abnormal person carrying a revolver). To mitigate this problem, we propose an early-fusion-based video synopsis that differs from existing synopsis methods in several key respects. First, we fuse the 2D camera data with the 3D LIDAR point cloud; second, we perform abnormal object detection on the merged data using a customized detector; finally, we extract only the meaningful data for creating the synopsis. We demonstrate satisfactory results in fusion, synopsis construction, and abnormal object detection, achieving an mAP of 85.97%.
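One common way to realize early fusion of a 2D camera and a 3D LIDAR point cloud is to project the points into the image plane and attach a depth channel before detection. The sketch below assumes a pinhole camera model with a hypothetical intrinsic matrix `K` and points already transformed into the camera frame; the paper's actual fusion pipeline may differ.

```python
import numpy as np

# Hedged sketch of camera/LIDAR early fusion: project 3D points into the image
# with an assumed pinhole intrinsic matrix K, then attach a per-pixel depth
# channel so a detector can consume a 4-channel (RGB + depth) frame.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])  # hypothetical intrinsics

def fuse_depth(image, points_cam):
    """image: HxWx3 array; points_cam: Nx3 LIDAR points in the camera frame."""
    h, w, _ = image.shape
    depth = np.zeros((h, w), dtype=np.float32)
    z = points_cam[:, 2]
    valid = z > 0                          # keep points in front of the camera
    uv = K @ points_cam[valid].T           # 3xN homogeneous pixel coordinates
    u = (uv[0] / uv[2]).astype(int)
    v = (uv[1] / uv[2]).astype(int)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[keep], u[keep]] = z[valid][keep]
    return np.dstack([image, depth])       # HxWx4 fused frame
```

A point at (0, 0, 2) in the camera frame lands at the principal point (320, 240) with depth 2.0, and the fused frame then feeds the abnormal-object detector directly.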
With the growth of video surveillance data, techniques such as video synopsis are used to construct short videos for analysis, thereby saving storage resources. Video synopsis frameworks can operate in real-time environments and create synopses from both single-view and multiview cameras; a typical framework encompasses object detection, extraction, and optimization algorithms. Contemporary state-of-the-art synopsis frameworks, however, suit only particular scenarios. This paper reviews traditional state-of-the-art video synopsis techniques and examines the different methods incorporated into each methodology. The review analyzes the various video synopsis frameworks and their components and provides evidence for classifying these techniques. We primarily investigate studies based on single-view and multiview cameras, provide a synopsis and taxonomy based on their characteristics, and then identify and briefly discuss the most commonly used datasets and evaluation metrics. At each stage of the synopsis framework, we present new trends and open challenges based on the obtained insights. Finally, we evaluate the different components, such as object detection, tracking, optimization, and stitching techniques, on a publicly available dataset and identify gaps among the different algorithms based on experimental results.
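The optimization stage shared by most synopsis frameworks can be illustrated with a toy version of the temporal-shift step: each tracked object ("tube") is moved to an earlier start time so the output video is shorter than the input. This sketch is a deliberate simplification; it ignores spatial collisions and smoothness costs, which real frameworks handle with energy minimization (e.g., simulated annealing or graph cuts), and the concurrency limit is an assumption.

```python
# Toy temporal-shift scheduler: greedily place each tube (an object's frame
# interval, given by its duration) at the earliest start time that keeps the
# number of concurrently visible tubes under a fixed cap. Real synopsis
# optimizers also penalize spatial overlap, which this sketch omits.
def schedule_tubes(durations, max_concurrent=2):
    """durations: list of tube lengths in frames; returns chosen start frames."""
    placed = []   # (start, end) intervals already scheduled
    starts = []
    for d in durations:
        t = 0
        # Advance until fewer than max_concurrent tubes overlap [t, t + d).
        while sum(1 for s, e in placed if t < e and t + d > s) >= max_concurrent:
            t += 1
        placed.append((t, t + d))
        starts.append(t)
    return starts
```

For durations [3, 2, 4] the tubes start at frames 0, 0, and 2, so the condensed video is 6 frames long instead of the 9 frames a sequential playback would need; this condensation-versus-collision trade-off is exactly what the surveyed optimization algorithms tune.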