To resist the adverse effect of viewpoint variations and improve vehicle re-identification performance, we design quadruple directional deep learning networks to extract quadruple directional deep learning features (QD-DLF) of vehicle images. The quadruple directional deep learning networks share a similar overall architecture: the same basic deep learning architecture followed by different directional feature pooling layers. Specifically, the shared basic deep learning architecture is a shortly and densely connected convolutional neural network that extracts basic feature maps from an input square vehicle image in the first stage. Then, the quadruple directional deep learning networks utilize different directional pooling layers, i.e., a horizontal average pooling (HAP) layer, a vertical average pooling (VAP) layer, a diagonal average pooling (DAP) layer and an anti-diagonal average pooling (AAP) layer, to compress the basic feature maps into horizontal, vertical, diagonal and anti-diagonal directional feature maps, respectively. Finally, these directional feature maps are spatially normalized and concatenated together as a quadruple directional deep learning feature for vehicle re-identification. Extensive experiments on both the VeRi and VehicleID databases show that the proposed QD-DLF approach outperforms multiple state-of-the-art vehicle re-identification methods. (arXiv:1811.05163v1 [cs.CV], 13 Nov 2018)
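The following is a minimal sketch, not the authors' code, of the four directional average poolings described above. It assumes a hypothetical square feature map of shape (C, N, N) in NumPy, and it assumes that diagonal/anti-diagonal pooling averages over all 2N-1 (anti-)diagonals; the exact pooling geometry in the paper may differ.

```python
import numpy as np

def directional_average_pools(fmap):
    """Return HAP, VAP, DAP and AAP of a square feature map of shape (C, N, N)."""
    c, n, _ = fmap.shape
    hap = fmap.mean(axis=2)                      # average each row    -> (C, N)
    vap = fmap.mean(axis=1)                      # average each column -> (C, N)
    # Diagonal pooling: average each of the 2N-1 diagonals (assumption).
    dap = np.stack(
        [np.mean(np.diagonal(fmap, offset=k, axis1=1, axis2=2), axis=1)
         for k in range(-(n - 1), n)], axis=1)   # -> (C, 2N-1)
    # Anti-diagonal pooling: same operation on the horizontally flipped map.
    flipped = fmap[:, :, ::-1]
    aap = np.stack(
        [np.mean(np.diagonal(flipped, offset=k, axis1=1, axis2=2), axis=1)
         for k in range(-(n - 1), n)], axis=1)   # -> (C, 2N-1)
    return hap, vap, dap, aap

# Illustrative use: normalize each directional map and concatenate them
# into one descriptor, mirroring the "normalize and concatenate" step above.
feats = directional_average_pools(np.random.rand(256, 16, 16))
qd_dlf = np.concatenate([f.ravel() / (np.linalg.norm(f) + 1e-12) for f in feats])
```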
Recently, pedestrian attributes such as gender, age and clothing have been used as soft biometric traits for recognizing people. Unlike existing methods that assume the independence of attributes during their prediction, we propose a multi-label convolutional neural network (MLCNN) to predict multiple attributes together in a unified framework. Firstly, a pedestrian image is roughly divided into multiple overlapping body parts, which are simultaneously integrated in the multi-label convolutional neural network. Secondly, these parts are filtered independently and aggregated in the cost layer. The cost function is a combination of multiple binary attribute classification cost functions. Moreover, we propose an attribute-assisted person re-identification method, which fuses attribute distances and low-level feature distances between pairs of person images to improve person re-identification performance. Extensive experiments show: 1) the average attribute classification accuracy of the proposed method is 5.2% and 9.3% higher than the SVM-based method on two public databases, VIPeR and GRID, respectively; 2) the proposed attribute-assisted person re-identification method is superior to existing approaches.
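As a minimal sketch of the combined cost described above, the snippet below sums independent binary cross-entropy terms, one per attribute, assuming a sigmoid output per attribute. The names, shapes and the choice of sigmoid cross-entropy are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def multi_label_cost(scores, labels):
    """scores, labels: arrays of shape (batch, num_attributes); labels in {0, 1}."""
    probs = 1.0 / (1.0 + np.exp(-scores))        # per-attribute sigmoid
    eps = 1e-12
    bce = -(labels * np.log(probs + eps) +
            (1 - labels) * np.log(1 - probs + eps))
    # Combine the per-attribute binary costs, then average over the batch.
    return bce.sum(axis=1).mean()
```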
Attributes are helpful to infer high-level semantic knowledge of pedestrians, thus improving the performance of pedestrian tracking, retrieval, re-identification, etc. However, current pedestrian databases are mainly designed for pedestrian detection or tracking applications, and semantic attribute annotations related to pedestrians are rarely provided. In this paper, we construct an Attributed Pedestrians in Surveillance (APiS) database with various scenes. The APiS 1.0 database includes 3661 images with 11 binary and 2 multi-class attribute annotations. Moreover, we develop an evaluation protocol for researchers to evaluate pedestrian attribute classification algorithms. With the APiS 1.0 database, we present two baseline methods, one for binary attribute classification and the other for multi-class attribute classification. For binary attribute classification, we train AdaBoost classifiers with color and texture features, while for multi-class attribute classification, we adopt a weighted K-Nearest-Neighbors (KNN) classifier with color features. Finally, we report and discuss the baseline performance on the APiS 1.0 database following the proposed evaluation protocol.
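Below is a minimal sketch of a weighted KNN multi-class attribute classifier in the spirit of the color-feature baseline described above. The inverse-distance weighting scheme, feature shapes and function names are assumptions for illustration only.

```python
import numpy as np

def weighted_knn_predict(train_feats, train_labels, query, k=5):
    """train_feats: (N, D) color features; train_labels: (N,) class ids; query: (D,)."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nn = np.argsort(dists)[:k]                   # indices of the k nearest samples
    weights = 1.0 / (dists[nn] + 1e-12)          # closer neighbors vote more (assumption)
    votes = {}
    for idx, w in zip(nn, weights):
        votes[train_labels[idx]] = votes.get(train_labels[idx], 0.0) + w
    return max(votes, key=votes.get)             # class with the largest weighted vote
```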