Object detection from apron surveillance video faces enormous storage pressure and computing overhead. Such systems typically rely on large cloud server clusters with high-speed network bandwidth and powerful GPUs for computing support, so designing a hardware-friendly, efficient object detection model is challenging. This paper presents a compression method for outdoor apron surveillance videos, combined with a lightweight detection model, that makes the inference process independent of the GPU. First, the gray-level variance of dynamic objects is leveraged to binarize the monitoring images; then an improved MobileNet-SSD algorithm is proposed. Moreover, int8 quantization is performed and bit operations are designed to eliminate floating-point operations, which simultaneously accelerates and compresses CNN models with only minor performance degradation. Experimental results on a large-scale dataset containing 22k monitoring images demonstrate that the compression ratio of the quantized images reaches up to 21x; combined with the quantized model, detection on apron surveillance images runs at nearly 25 FPS in a pure CPU environment, with an mAP of 86.83% and a model size compressed to 600 KB. The significantly reduced computational complexity makes the method applicable to embedded devices.
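To make the quantization step concrete, the following is a minimal sketch of int8 post-training quantization with an integer-only inner loop, assuming symmetric per-tensor quantization; the function names and the specific rounding scheme are illustrative assumptions, not the paper's implementation.

import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 values to int8 plus one scale factor."""
    scale = np.abs(w).max() / 127.0          # symmetric range [-127, 127]
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_accumulate(q_w, q_x):
    """Integer-only multiply-accumulate; the float scale is applied
    once at the end, so the inner loop has no floating-point ops."""
    return np.tensordot(q_w.astype(np.int32), q_x.astype(np.int32), axes=2)

w = np.random.randn(3, 3).astype(np.float32)   # e.g., a conv kernel
x = np.random.randn(3, 3).astype(np.float32)   # e.g., an input patch
q_w, s_w = quantize_int8(w)
q_x, s_x = quantize_int8(x)
acc = int8_accumulate(q_w, q_x)                # int32 accumulator
approx = acc * (s_w * s_x)                     # dequantize once
print(approx, np.tensordot(w, x, axes=2))      # close to the float result

Because both the weights and the activations are quantized with a single scale each, the dequantization collapses into one multiplication per output, which is what allows a pure-CPU deployment.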
The airport apron hosts much of the preparation for flight operations, and the timely progress of these tasks is of great significance to flight operation. To build a more intelligent and easy-to-deploy apron operation analysis and guarantee system, a low-cost, fast, real-time object detection scheme is needed. In this article, a real-time object detection solution based on an edge-cloud system for airport apron operation surveillance video is proposed, comprising the lightweight detection model Edge-YOLO, an edge video detection acceleration strategy, and a cloud-based detection result verification mechanism. Edge-YOLO reduces the number of parameters and the computational complexity through model lightweighting, achieving better detection speed on edge embedded devices with weak computing power, and adds an attention mechanism to compensate for the accuracy loss. The edge video detection acceleration strategy further accelerates Edge-YOLO by exploiting the motion information of objects in the video to achieve real-time detection. The cloud-based verification mechanism verifies and corrects the detection results generated at the edge through a multi-level intervention mechanism to improve their accuracy. With this solution, reliable, real-time monitoring of apron video can be achieved on edge devices with the support of a small amount of cloud computing power.
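A minimal sketch of the motion-based acceleration idea follows, assuming simple frame differencing as the motion cue; the detector callable and the threshold value are hypothetical stand-ins, not the paper's Edge-YOLO pipeline.

import cv2

def detect_video(path, detector, motion_thresh=25.0):
    """Run the detector only on frames with enough motion;
    reuse the previous boxes on (near-)static frames."""
    cap = cv2.VideoCapture(path)
    prev_gray, last_boxes = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Mean absolute difference as a cheap motion score.
            motion = cv2.absdiff(gray, prev_gray).mean()
            if motion < motion_thresh:
                prev_gray = gray
                yield last_boxes           # skip the full model
                continue
        last_boxes = detector(frame)       # run the model only on motion
        prev_gray = gray
        yield last_boxes
    cap.release()

On an apron camera, long stretches of the video are static, so gating the detector on a cheap motion score can skip most full-model invocations.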
To accurately record the entry and departure times of helicopters and reduce the incidence of general aviation accidents, this paper proposes a helicopter entry and departure recognition method based on a self-learning mechanism, supported by a lightweight object detection module and an image classification module. The original image data obtained from the lightweight object detection module are used to construct an Automatic Selector of Data (Auto-SD) and an Adjustment Evaluator of Data Bias (Ad-EDB): Auto-SD automatically generates a pseudo-clustering of the original image data, and Ad-EDB then performs the adjustment evaluation and selects the best-matching module for image classification. The self-learning mechanism is applied to the helicopter entry and departure recognition scenario, with the ResNet18 residual network selected for state classification. On a self-built helicopter entry and departure dataset, the accuracy reaches 97.83%, which is 6.51% better than the bounding-box detection method. This largely lifts the strong reliance on manual annotation for helicopter entry and departure status classification, and the data auto-selector is continuously optimized using the preceding classification results, establishing a circular learning loop in the algorithm.
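As a rough illustration of such a circular learning loop, here is a minimal sketch in which confidence-filtered pseudo-labels stand in for Auto-SD's pseudo-clustering; Ad-EDB's bias evaluation and module selection are not modeled, and all names besides ResNet18 are assumptions.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=2)            # e.g., entry vs. departure state
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def self_learning_round(unlabeled_loader, conf_thresh=0.95):
    """One loop iteration: pseudo-label confident samples, then retrain."""
    model.eval()
    pseudo = []
    with torch.no_grad():
        for images in unlabeled_loader:
            probs = F.softmax(model(images), dim=1)
            conf, labels = probs.max(dim=1)
            keep = conf > conf_thresh      # keep only confident predictions
            if keep.any():
                pseudo.append((images[keep], labels[keep]))
    model.train()
    for images, labels in pseudo:          # retrain on the pseudo-labels
        opt.zero_grad()
        F.cross_entropy(model(images), labels).backward()
        opt.step()

Each round's classifier output feeds the next round's data selection, which is the sense in which the loop reduces the need for manual annotation.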
This paper proposes a video coding framework for the apron surveillance scene that aims to improve coding efficiency by eliminating long-term redundancy at the object level. To achieve this goal, the study first extends an existing block-based hybrid video coding framework to exploit object-level video redundancy. Second, an object-library mechanism is designed to collect representative object images as coding references over larger temporal and spatial scales. Finally, a virtual reference frame, which blends background and foreground references from the object library, is adaptively composited according to the video content to improve inter-prediction performance. Preliminary experimental results demonstrate that the proposed method achieves a BD-rate reduction of up to 23.97% on apron surveillance video sequences compared with standard High Efficiency Video Coding (HEVC).
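The compositing step can be pictured with the following minimal sketch, assuming the object library supplies a background model plus foreground patches with binary masks and placement coordinates; the paper's adaptive, content-dependent selection logic is omitted and the data layout is an assumption.

import numpy as np

def composite_virtual_reference(background, foreground_refs):
    """background: HxWx3 uint8 array; foreground_refs: list of
    (patch hxwx3, mask hxw in {0, 1}, top-left (y, x)) tuples."""
    vrf = background.copy()
    for patch, mask, (y, x) in foreground_refs:
        h, w = mask.shape
        region = vrf[y:y + h, x:x + w]
        # Paste the library object where the mask is set,
        # keep the background model elsewhere.
        vrf[y:y + h, x:x + w] = np.where(mask[..., None] == 1, patch, region)
    return vrf  # used as an additional inter-prediction reference

Because the composited frame already contains the long-term appearance of both the background and the recurring objects, motion-compensated prediction against it needs fewer residual bits.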