2018
DOI: 10.1109/tip.2018.2819820

Vehicle Re-Identification by Deep Hidden Multi-View Inference

Abstract: Vehicle re-identification (re-ID) is an area that has received far less attention in the computer vision community than the prevalent person re-ID. Possible reasons for this slow progress are the lack of appropriate research data and the special 3D structure of a vehicle. Previous works have generally focused on some specific views (e.g., the front), but these methods are less effective in realistic scenarios, where vehicles usually appear to cameras in arbitrary views. In this paper, we focus on the uncertainty …

Cited by 113 publications (73 citation statements). References 40 publications.
“…Although the rank-5 identification rate of the proposed QD-DLF method is a bit lower than that of PROVID [1], the proposed QD-DLF method achieves a much higher MAP and rank-1 identification rate, and thus is still superior to PROVID [1]. Secondly, compared with the single-modal deep-learning-based vehicle re-identification methods (i.e., NuFACT [1], DenseNet121 [25], SCCN-Ft+CLBL-8-Ft [12], ABLN-Ft-16 [11], FACT [2], GoogLeNet [26] and VGG-CNN-M-1024 [3]), the proposed QD-DLF method shows a larger accuracy improvement. Specifically, the best single-modal deep-learning-based vehicle re-identification method, i.e., NuFACT [1], obtains only a 48.47% MAP, a 76.76% rank-1 identification rate and a 91.42% rank-5 identification rate, which are much lower than those of the proposed QD-DLF method.…”
Section: Performance Evaluation 1) Comparison on VeRi
confidence: 93%
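The MAP and rank-1/rank-5 figures quoted above are the standard re-ID evaluation metrics. As a rough, hypothetical sketch (not the evaluation code of any of the cited papers), they can be computed from a query-by-gallery distance matrix roughly as follows; the function name is invented here, and the sketch ignores the same-camera filtering used in the official VeRi protocol:

```python
import numpy as np

def evaluate_reid(dist, query_ids, gallery_ids, ks=(1, 5)):
    """Sketch: mAP and rank-k identification rates (CMC points) from a
    (num_query, num_gallery) distance matrix. Simplified protocol only."""
    aps = []
    cmc_hits = np.zeros(max(ks))
    num_valid = 0
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                      # gallery ranked by distance
        matches = (gallery_ids[order] == query_ids[i])
        hit_ranks = np.where(matches)[0]                 # 0-indexed ranks of true matches
        if hit_ranks.size == 0:                          # no true match for this query
            continue
        num_valid += 1
        # average precision: mean of precision at each true-match rank
        precisions = [(j + 1) / (r + 1) for j, r in enumerate(hit_ranks)]
        aps.append(np.mean(precisions))
        # CMC: a first hit at rank r counts as a success for every k > r
        first_hit = hit_ranks[0]
        if first_hit < max(ks):
            cmc_hits[first_hit:] += 1
    mAP = float(np.mean(aps))
    cmc = {k: float(cmc_hits[k - 1] / num_valid) for k in ks}
    return mAP, cmc
```

Reported "rank-1" and "rank-5" identification rates correspond to cmc[1] and cmc[5] in this sketch.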
“…Moreover, it can be seen that SCCN-Ft+CLBL-8-Ft [12] and ABLN-Ft-16 [11] do not show an obvious advantage on the VeRi database, although they specially consider viewpoint variation. This is because each vehicle in the VeRi database is not densely captured from different camera viewpoints, which does not fully meet the training-data requirements of SCCN-Ft+CLBL-8-Ft [12] and ABLN-Ft-16 [11], limiting their performance improvement.…”
Section: Performance Evaluation 1) Comparison on VeRi
confidence: 96%
“…The method works well for pairs of images that are spatially and temporally close to each other. Zhou et al. (2018) addressed the multi-view V-reID problem by generating multi-view features for each query image, which can be considered a descriptive representation containing all the information from the multiple views. The method extracts features from a single image belonging to one view.…”
Section: Deep Feature Based Methods
confidence: 99%
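The statement above summarizes the idea of inferring a multi-view descriptor from a feature extracted under a single view. The following is only a loose, hypothetical illustration of that idea using per-view transformation heads; the class name, dimensions, and fusion layer are invented for the sketch and are not the actual architecture of Zhou et al. (2018):

```python
import torch
import torch.nn as nn

class MultiViewInference(nn.Module):
    """Hypothetical sketch: map one single-view feature vector to a set of
    view-specific features and fuse them into one multi-view descriptor.
    Illustrates the idea in the citation above, not the cited model."""

    def __init__(self, feat_dim=1024, num_views=4):
        super().__init__()
        # one lightweight transformation head per (assumed) viewpoint
        self.view_heads = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
            for _ in range(num_views)
        ])
        # fuse the inferred view features into a single descriptor
        self.fuse = nn.Linear(feat_dim * num_views, feat_dim)

    def forward(self, single_view_feat):               # (batch, feat_dim)
        view_feats = [head(single_view_feat) for head in self.view_heads]
        multi_view = torch.cat(view_feats, dim=1)      # (batch, feat_dim * num_views)
        return self.fuse(multi_view)                   # multi-view embedding

# usage: descriptor = MultiViewInference()(torch.randn(8, 1024))
```

The resulting descriptor would then be compared across cameras with a standard distance metric, as in the evaluation sketch earlier.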
“…9, different vehicles from the VeRi-776 dataset, varying in color, type, model, and viewpoint, are shown. Zhou et al. (2018) collected the Toy Car ReID dataset. This is the first synthetic vehicle dataset collected in an indoor environment using multiple cameras.…”
Section: VeRi-776
confidence: 99%