2013
DOI: 10.1007/978-3-642-37331-2_3

Human Reidentification with Transferred Metric Learning

Cited by 423 publications (329 citation statements)
References 28 publications
“…We utilize the pedestrian images from the publicly available CUHK Person Re-identification Dataset [4] to evaluate our approach. The pedestrian images in this dataset are collected on a campus, where bag appearance and location change greatly, as seen in Fig. 8.…”
Section: Methods (mentioning)
confidence: 99%
“…In visual surveillance, people are interested in automatically searching for persons in a huge amount of video data [1][2][3][4][5][6][7][8][9][10][11]. Because bags are a very common target appearing in surveillance video of public areas such as streets, subways, tourist attractions, airports and supermarkets, mining bag information is conducive to criminal monitoring, lost-person search, video indexing, criminal investigation, and so on.…”
Section: Introduction (mentioning)
confidence: 99%
“…CUHK01 [15] consists of front-view and back-view images of 972 people, which are used as gallery and probe images in the experiment. The images in CUHK01 are resized to 160 × 60 for standardization.…”
Section: Experimental Settings (mentioning)
confidence: 99%
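The fixed 160 × 60 crop size quoted above is a simple preprocessing step. Below is a minimal sketch of how such standardization might be done with Pillow; the directory names and file pattern are hypothetical and not taken from the cited paper.

```python
from pathlib import Path

from PIL import Image  # pip install Pillow

# Hypothetical locations for the original and resized CUHK01 crops.
SRC_DIR = Path("cuhk01/raw")
DST_DIR = Path("cuhk01/resized_160x60")
DST_DIR.mkdir(parents=True, exist_ok=True)

# Standardize every pedestrian crop to 160 (height) x 60 (width),
# the size mentioned in the experimental setting quoted above.
for img_path in sorted(SRC_DIR.glob("*.png")):
    with Image.open(img_path) as img:
        resized = img.resize((60, 160), Image.BILINEAR)  # PIL expects (width, height)
        resized.save(DST_DIR / img_path.name)
```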
“…In this work, we extensively consider four state-of-the-art person re-identification methods including SDALF [6], QAF [33], Mid-level filter [32], and SDC knn [30]. These methods were evaluated on three different benchmark datasets: VIPeR [10], ETHZ [5,26], and CUHK01 [15]. By doing this, we expect to provide a comprehensive evaluation of the proposed approach.…”
Section: Introduction (mentioning)
confidence: 99%
“…We train these models on a variety of large benchmark datasets including VIPeR [28] (632 distinct persons in 128 × 48 crops), PRID [13] (200 distinct persons), GRID [22] (250 persons) and CUHK [21] (971 persons). We resample all detections to match VIPeR's dimensions.…”
Section: Classifier Training, Representation and Datasets (mentioning)
confidence: 99%
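As a rough sketch of the setup described above, pooling several benchmarks and resampling every detection to VIPeR's 128 × 48 crop size, one might do something like the following with OpenCV. The person counts in the comments follow the quote; the dataset root paths and file pattern are assumptions for illustration.

```python
import glob

import cv2  # pip install opencv-python

# Hypothetical dataset roots; person counts follow the citation above.
DATASETS = {
    "VIPeR": "data/viper",  # 632 distinct persons, native 128 x 48 crops
    "PRID": "data/prid",    # 200 distinct persons
    "GRID": "data/grid",    # 250 persons
    "CUHK": "data/cuhk",    # 971 persons
}

# VIPeR crop dimensions: 128 pixels high, 48 pixels wide.
TARGET_W, TARGET_H = 48, 128

def load_resampled(root: str) -> list:
    """Load every crop under `root` and resample it to VIPeR's dimensions."""
    crops = []
    for path in sorted(glob.glob(f"{root}/**/*.png", recursive=True)):
        img = cv2.imread(path)
        if img is None:  # skip unreadable files
            continue
        # cv2.resize takes the target size as (width, height)
        crops.append(cv2.resize(img, (TARGET_W, TARGET_H),
                                interpolation=cv2.INTER_LINEAR))
    return crops

# Build one training pool per benchmark, all at a common resolution.
training_pool = {name: load_resampled(root) for name, root in DATASETS.items()}
```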