COVID-19 is a disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It was first identified in December 2019 in Wuhan, China, and has resulted in an ongoing pandemic with a large number of infections and deaths. The coronavirus is primarily spread between people during close contact. Motivated by this, this research proposes an artificial intelligence system for social distancing classification of persons using thermal images. By exploiting the YOLOv2 (You Only Look Once) approach, a novel deep learning detection technique is developed for detecting and tracking people in indoor and outdoor scenarios. An algorithm is also implemented for measuring the distance between persons and automatically checking whether social distancing rules are respected. Hence, this work aims at minimizing the spread of the COVID-19 virus by evaluating if and how persons comply with social distancing rules. The proposed approach is applied to images acquired through thermal cameras to establish a complete AI system for people tracking, social distancing classification, and body temperature monitoring. The training phase uses two datasets captured with different thermal cameras, and the Ground Truth Labeler app is used for labeling the persons in the images. The proposed technique has been deployed on a low-cost embedded system (Jetson Nano) equipped with a fixed camera, and it is integrated into a distributed surveillance video system so that people from several cameras can be visualized in one centralized monitoring application. The achieved results show that the proposed method is suitable for setting up a surveillance system in smart cities for people detection, social distancing classification, and body temperature analysis.
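The distance-classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes detections arrive as `(x, y, w, h)` pixel bounding boxes, and a hypothetical `pixels_per_metre` calibration factor converts image distances to metres; all names and thresholds are illustrative.

```python
from itertools import combinations

def box_centroid(box):
    """Return the (x, y) centre of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def classify_distances(boxes, pixels_per_metre, min_distance_m=1.0):
    """Return index pairs of detections violating the distance rule."""
    violations = []
    for (i, a), (j, b) in combinations(enumerate(boxes), 2):
        (xa, ya), (xb, yb) = box_centroid(a), box_centroid(b)
        dist_m = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 / pixels_per_metre
        if dist_m < min_distance_m:
            violations.append((i, j))
    return violations

# Example: two people 40 px apart with 100 px/m -> 0.4 m -> violation.
boxes = [(0, 0, 20, 40), (40, 0, 20, 40), (300, 0, 20, 40)]
print(classify_distances(boxes, pixels_per_metre=100.0))  # -> [(0, 1)]
```

A real deployment would derive the pixel-to-metre factor from camera calibration or a homography of the ground plane rather than a single constant.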
This work presents real-time video-based fire and smoke detection using the YOLOv2 convolutional neural network (CNN) for antifire surveillance systems. YOLOv2 is designed with a lightweight neural network architecture to meet the requirements of embedded platforms. The training stage is processed offline with fire and smoke image sets covering different indoor and outdoor scenarios, and the Ground Truth Labeler app is used to generate the ground truth data from the training set. The trained model was tested and compared with other state-of-the-art methods on a large set of fire/smoke and negative videos in different environments, both indoor (e.g., a railway carriage, container, bus wagon, or home/office) and outdoor (e.g., a storage or parking area); YOLOv2 proved a better option than the other approaches for real-time fire/smoke detection. The system has been deployed on a low-cost embedded device (Jetson Nano) with a single fixed camera per scene, working in the visible spectral range. There are no specific requirements for the video camera; hence, when the proposed solution is applied for safety on board vehicles, in transport infrastructures, or in smart cities, the cameras already installed in closed-circuit television surveillance systems can be reused. The achieved experimental results show that the proposed solution is suitable for creating a smart, real-time video-surveillance system for fire/smoke detection.
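The per-frame inference loop on the embedded device might look like the following sketch. The detector here is a plain callable standing in for the trained YOLOv2 model; the function names, tuple layout, and the 0.5 confidence threshold are all illustrative assumptions, not values from the work.

```python
def filter_detections(detections, min_score=0.5, classes=("fire", "smoke")):
    """Keep only confident fire/smoke detections.

    Each detection is an assumed (label, score, box) tuple.
    """
    return [d for d in detections if d[0] in classes and d[1] >= min_score]

def run_surveillance(frames, detector, min_score=0.5):
    """Yield (frame_index, detections) for frames with confident hits."""
    for i, frame in enumerate(frames):
        hits = filter_detections(detector(frame), min_score)
        if hits:
            yield i, hits

# Example with a dummy detector that "sees" smoke in frame 1 only.
frames = ["f0", "f1", "f2"]
detector = lambda f: [("smoke", 0.9, (10, 10, 40, 30))] if f == "f1" else []
print(list(run_surveillance(frames, detector)))
# -> [(1, [('smoke', 0.9, (10, 10, 40, 30))])]
```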
Smoke detection is a critical task for avoiding large-scale fire disasters in industrial environments and cities. Adding intelligent video-based techniques to existing camera infrastructure enables faster response times than traditional analog smoke detectors. This work presents a hybrid approach for the rapid and precise identification of smoke in a video sequence. The algorithm combines a traditional feature detector, based on Kalman filtering and motion detection, with a lightweight shallow convolutional neural network. This technique automatically selects specific regions of interest within the image by generating bounding boxes around gray-colored moving objects; in the final step, the convolutional neural network verifies the actual presence of smoke in the proposed regions of interest. The algorithm also provides an alarm generator that triggers an alarm signal if the smoke persists over a time window of 3 s. The proposed technique has been compared with state-of-the-art methods from the literature using several videos from public and non-public datasets, showing an improvement in the detection metrics. Finally, we developed a portable solution for embedded systems and evaluated its performance on the Raspberry Pi 3 and the Nvidia Jetson Nano.
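The motion-plus-grayness region proposal stage can be sketched with plain NumPy: frame differencing flags moving pixels, a grayness test keeps low-saturation (smoke-like) pixels, and the bounding box of the surviving mask becomes a candidate region for the CNN verifier. The thresholds and function name are illustrative assumptions, not the paper's values, and the paper's Kalman-filter tracking is omitted for brevity.

```python
import numpy as np

def propose_smoke_region(prev_frame, frame, motion_thr=25, gray_thr=20):
    """Return one (x, y, w, h) box over gray moving pixels, or None."""
    # Moving pixels: large absolute difference between consecutive frames.
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)).max(axis=2) > motion_thr
    # Gray pixels: R, G, B channels nearly equal (low saturation).
    grayness = frame.max(axis=2).astype(int) - frame.min(axis=2).astype(int)
    mask = motion & (grayness < gray_thr)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    x, y = xs.min(), ys.min()
    return (int(x), int(y), int(xs.max() - x + 1), int(ys.max() - y + 1))

# Example: a gray 4x4 blob appears in an otherwise static black frame.
prev = np.zeros((32, 32, 3), dtype=np.uint8)
cur = prev.copy()
cur[10:14, 8:12] = 128  # gray moving patch
print(propose_smoke_region(prev, cur))  # -> (8, 10, 4, 4)
```

In practice one would label connected components separately and propose one box per blob rather than a single global bounding box.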
This paper proposes a video-based smoke detection technique for early warning in antifire surveillance systems. The algorithm is developed to detect smoke behavior in a restricted video-surveillance environment, both indoor (e.g., a railway carriage, bus wagon, industrial plant, or home/office) and outdoor (e.g., a storage area or parking area). The proposed technique exploits a Kalman estimator, color analysis, image segmentation, blob labeling, geometrical feature analysis, and an M-of-N decisor in order to extract an alarm signal within a strict real-time deadline. This new technique requires just a few seconds to detect fire smoke, 15 times faster than required by fire-alarm standards for industrial or transport systems, e.g., the EN 50155 standard for onboard train fire-alarm systems; indeed, EN 50155 allows a response time of up to 60 s for onboard systems. The proposed technique has been tested and compared with state-of-the-art systems using the open-access Firesense dataset, developed as an output of a European FP7 project and including several indoor and outdoor fire/smoke scenes. All the detection metrics (recall, accuracy, F1 score, precision, etc.) improve when comparing Advanced Video SmokE Detection (AdViSED) with other video-based antifire works recently proposed in the literature. The proposed technique is flexible in terms of input camera type, frame size, and frame rate, and it has been implemented on a low-cost embedded platform to build a distributed antifire system accessible via web browser.
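The M-of-N decisor mentioned above can be illustrated with a short sketch: the alarm fires when at least M of the last N per-frame smoke decisions are positive, which smooths out single-frame false positives. The class name and the M, N values here are illustrative assumptions, not the paper's parameters.

```python
from collections import deque

class MOfNDecisor:
    """Raise an alarm when >= m of the last n frame decisions are True."""

    def __init__(self, m=8, n=10):
        self.m = m
        self.history = deque(maxlen=n)  # sliding window of last n decisions

    def update(self, frame_is_smoke):
        """Feed one per-frame decision; return True if the alarm fires."""
        self.history.append(bool(frame_is_smoke))
        return sum(self.history) >= self.m

decisor = MOfNDecisor(m=3, n=5)
decisions = [True, False, True, True, False]
print([decisor.update(d) for d in decisions])
# -> [False, False, False, True, True]
```

Choosing N to span roughly 3 s of video at the camera's frame rate would match the persistence window used by the alarm generators described in these works.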