Person-specific identification is an important problem in computer vision, and forensic video analysis is a key tool in surveillance applications: the recorded footage of a specific person can support personalised monitoring. This study proposes a solution that identifies a specific person rapidly in offline footage, which is valuable for the early analysis of an incident or crime. The main idea is to reduce the enormous volume of video data with an object-based video synopsis. Viola–Jones face detection and deformable part-based models are then used to detect facial attributes, after which histogram of oriented gradients (HOG) and oriented centre-symmetric local binary pattern (CS-LBP) features are extracted. A support vector machine (SVM) classifier separates weak features from strong ones, and the strong features are used to recognise the person. The algorithm works well even in complicated situations such as expression changes, pose and illumination variations, and partial or full occlusion of the face in a few frames; an advantage of the synoptic video is that the person can still be recognised from frames in which the face is not occluded. Experimental results on benchmark and real-time datasets demonstrate the effectiveness of the proposed algorithm.
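As a rough illustration of the texture descriptor family named above, the following is a minimal sketch of a plain centre-symmetric LBP histogram in NumPy. Note the assumptions: the abstract's "oriented" CS-LBP variant and its exact parameters are not specified here, so this sketch implements only the basic CS-LBP coding (4 bits per pixel from the four centre-symmetric neighbour pairs), and the function names and threshold are illustrative, not the authors' implementation.

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """Basic centre-symmetric LBP on an 8-neighbourhood.

    Each interior pixel is coded with 4 bits, one per
    centre-symmetric neighbour pair, giving codes in 0..15.
    """
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    # 8 neighbours in clockwise order starting at the top-left;
    # pair i is (offsets[i], offsets[i + 4]).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    # shifted view of the interior region for a given neighbour offset
    core = lambda dy, dx: img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit in range(4):
        dy1, dx1 = offsets[bit]
        dy2, dx2 = offsets[bit + 4]
        # set the bit where the pair difference exceeds the threshold
        codes |= ((core(dy1, dx1) - core(dy2, dx2)) > threshold).astype(np.uint8) << bit
    return codes

def cs_lbp_histogram(image, bins=16):
    """L1-normalised 16-bin CS-LBP histogram, usable as an SVM feature."""
    codes = cs_lbp(image)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```

In a pipeline like the one described, such per-patch histograms would typically be concatenated with HOG features before being fed to the SVM classifier.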
Real‐time inspection and removal of individual Fusarium head blight (FHB) infected corn grains from processing lines has been challenging because of bulk handling and the small kernel size. In this study, four variants (small (s), medium (m), nano (n), and large (l)) of the You Only Look Once (YOLO) version 5 object detection technique were trained to detect Fusarium infection in a moving monolayer of touching and non‐touching corn grains. The YOLOv5 models were evaluated for their performance in detecting FHB infection in individual corn grains, using a heterogeneous dataset of images and video frames of healthy and FHB‐infected corn grains under different illuminations. The mean average precision at an Intersection over Union threshold of 0.5 (mAP@50) was 99%, 98%, 95%, and 96% for the YOLOv5‐s, YOLOv5‐m, YOLOv5‐n, and YOLOv5‐l models, respectively, and the detection speed in videos was 3.9, 1.6, 9.8, and 0.8 frames per second. For non‐touching grains, all four YOLOv5 variants showed 100% precision, but for touching grains all variants produced false negatives in detecting FHB infection, especially on overlapping kernels. The recall values were 98%, 99%, 96%, and 97% for the YOLOv5‐s, m, n, and l models, respectively. The YOLOv5‐m model achieved the best combination of mAP, detection speed, and low false negatives, and has potential for real‐time detection of Fusarium infection in corn grains, apart from the lag time in videos.

Practical Application: The developed video analysis technique based on the YOLOv5 object detection method will be beneficial for the accurate identification of Fusarium‐infected corn grains in bulk handling facilities.
Individual FHB‐infected grains could be detected on processing lines, enabling real‐time inspection that replaces the random sampling techniques currently used and thereby preventing the entry of Fusarium mycotoxins into the food chain. For non‐touching corn grains, all YOLOv5 model variants showed 100% precision in identifying healthy and FHB‐infected grains. For touching grains, the YOLOv5‐m model showed the best combination of mAP, detection speed, and low false negatives, making it appropriate for inspection on moving conveyor belts. The nano model, with its lightweight architecture, can be installed in portable devices for immediate detection of FHB infection without lag time.
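To make the evaluation metrics above concrete, the following is a minimal sketch of how a detection is counted as a true positive at the mAP@50 threshold: a predicted box matches an unclaimed ground-truth box when their Intersection over Union is at least 0.5. The box format, greedy matching strategy, and function names are assumptions for illustration, not the authors' evaluation code.

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(preds, gts, thresh=0.5):
    """Greedy one-to-one matching of detections to ground truth.

    A prediction is a true positive if it overlaps an unmatched
    ground-truth box with IoU >= thresh; leftovers are false
    positives, and unmatched ground truths are false negatives.
    """
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thresh
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 1.0
    recall = tp / len(gts) if gts else 1.0
    return precision, recall
```

The false negatives reported for touching grains correspond to ground-truth kernels left unmatched by this kind of procedure, which lowers recall while leaving precision high.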
Person re-identification (Re-ID) is a problem recently faced in computer vision. Most existing methods focus on body features captured in the scene by high-end surveillance systems, which are unhelpful for authentication: the technology came up empty in surveillance scenarios such as London's subway bomb blast and the Bangalore ATM attack, even though images of the suspects existed in official databases. The prime objective of this chapter is therefore to develop an efficient facial-feature-based person re-identification framework that authenticates a person in a controlled scenario. Faces are first detected by a faster region-based convolutional neural network (Faster R-CNN). Landmark points are then obtained using the supervised descent method (SDM) algorithm, and the face is recognized by a joint Bayesian model. Each image in the training database is given an ID, and candidates are ranked by a Re-ID index according to their similarity to the query image. The proposed framework overcomes challenges such as pose variations, low resolution, and partial occlusions (masks and goggles). The experimental accuracy on a benchmark dataset, inferred from the receiver operating characteristic (ROC) and cumulative matching characteristics (CMC) curves, demonstrates the effectiveness of the proposed method.
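The CMC curve used to report Re-ID performance can be sketched as follows: for each query, the gallery is ranked by similarity, and rank-k accuracy is the fraction of queries whose true identity appears within the top k ranks. The score-matrix layout and function name are assumptions for illustration; the chapter's joint Bayesian similarity would supply the actual scores.

```python
import numpy as np

def cmc_curve(score_matrix, query_ids, gallery_ids, max_rank=5):
    """Cumulative Matching Characteristics from a query-by-gallery score matrix.

    cmc[k-1] is the fraction of queries whose true identity appears
    within the top-k ranked gallery entries (higher score = closer match).
    """
    gallery_ids = np.asarray(gallery_ids)
    hits = np.zeros(max_rank)
    for q, qid in enumerate(query_ids):
        order = np.argsort(-score_matrix[q])       # descending similarity
        first_hit = np.where(gallery_ids[order] == qid)[0][0]
        if first_hit < max_rank:
            hits[first_hit:] += 1                  # counts toward all ranks >= hit
    return hits / len(query_ids)
```

By construction the curve is non-decreasing in k, which is why rank-1 accuracy is the strictest summary of a Re-ID system.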