2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2016.149

Person Re-identification by Multi-Channel Parts-Based CNN with Improved Triplet Loss Function

Cited by 1,111 publications (838 citation statements: 2 supporting, 836 mentioning, 0 contrasting)
References 34 publications

“…Only K-LFDA, when trained with the mom LE [24] feature, attains performance comparable to DMN. However, motivated to resolve the challenges of re-identification in the real world (i.e., a multimodal image space and diverse impostors), IRM3 + CVI ( = 15) achieves much better results than MCP-CNN [39], E2E-CAN [31], Quadruplet-Net [33], and JLML [34], while our IRM3 + CVI ( = 15) has 1.49% higher rank@1 than DLPA [32]. DLPA extracts deep features by semantically aligning body parts as well as rectifying pose variations.…”
Section: Results on CUHK01
confidence: 99%
“…Since the positive samples for each person are scarce compared to the number of negative samples, we follow the data-augmentation protocol in [49] and augment each person pair five times. Similarly, following the protocol in [39], we generate 20 triplets for each positive pair. The triplet samples imp and Ng for a person, using the impostor and negative gallery, are given as…”
Section: Triplet Formation
confidence: 99%
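
The triplet-formation step quoted above is concrete enough to sketch. Below is a minimal Python illustration of that step only (the five-fold pair augmentation is omitted); images_by_id and generate_triplets are illustrative names, not the cited papers' actual code, and the 20-triplets-per-positive-pair count follows the quote.

import random

def generate_triplets(images_by_id, num_per_pair=20, seed=0):
    # Form (anchor, positive, negative) triplets: for every positive pair
    # of images sharing an identity, draw num_per_pair negatives from the
    # other identities, matching the 20-triplets-per-pair protocol of [39].
    rng = random.Random(seed)
    ids = list(images_by_id)
    triplets = []
    for pid in ids:
        imgs = images_by_id[pid]
        others = [i for i in ids if i != pid]
        for a in range(len(imgs)):
            for p in range(a + 1, len(imgs)):
                for _ in range(num_per_pair):
                    neg_id = rng.choice(others)
                    neg = rng.choice(images_by_id[neg_id])
                    triplets.append((imgs[a], imgs[p], neg))
    return triplets

For example, two identities with two images each give one positive pair per identity and therefore 40 triplets in total.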
“…As shown in Figure [8], all the networks share the same parameters W. We extract the discriminative features from the raw images into a learned feature space. This is like .…”
Section: The Overall Structure
confidence: 99%
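
The parameter sharing this statement describes, with every branch of the network reusing the same weights W, is the standard Siamese/triplet arrangement. A minimal PyTorch sketch under that assumption follows; the architecture and the name SharedEmbeddingNet are illustrative, not the network of the cited paper.

import torch
import torch.nn as nn

class SharedEmbeddingNet(nn.Module):
    # One CNN whose parameters W are shared across all branches: the same
    # module embeds anchor, positive, and negative images into the learned
    # feature space.
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

net = SharedEmbeddingNet()
anchor, pos, neg = (torch.randn(4, 3, 128, 64) for _ in range(3))
ea, ep, en = net(anchor), net(pos), net(neg)  # all three branches reuse the same W
loss = nn.functional.triplet_margin_loss(ea, ep, en, margin=1.0)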
“…use the same attributes as [83] when testing on the PRID [65] dataset). Evaluated on four separate datasets (iLIDS, PRID, VIPeR, SAIVT-SoftBio), they consistently outperform the techniques they evaluate against, with particularly strong results on the iLIDS dataset, where they achieve an accuracy of almost 100% at rank 50. … person from an enrolled image, including Siamese convolutional neural networks (SCNN) [145,160] and multi-channel CNN models such as triplets [24,93]. However, in the majority of this research the computational expense exceeds that of the hand-crafted counterparts, making these approaches currently difficult to run on real-time systems.…”
Section: Person Re-identification
confidence: 99%
“…Using this stored background model, the similarity of the template to the target location in the background is computed (24), and this similarity is used to determine whether the particle is a better match to the background or the foreground,…”
Section: Particle-Based Search
confidence: 99%
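
The background-versus-foreground test described in this statement can be sketched directly. Below is a minimal Python example that assumes normalized cross-correlation as the similarity measure and uses illustrative names (is_background_match, bg_model); it is not the actual formulation behind the cited equation (24).

import numpy as np

def is_background_match(template, frame, bg_model, cx, cy):
    # Compare the template against the patch at the particle's location
    # (cx, cy) in the current frame and in the stored background model;
    # the particle is discounted when it resembles the background more.
    h, w = template.shape
    y0, x0 = cy - h // 2, cx - w // 2  # assumes the patch lies inside the image
    fg_patch = frame[y0:y0 + h, x0:x0 + w]
    bg_patch = bg_model[y0:y0 + h, x0:x0 + w]

    def ncc(a, b):
        # Normalized cross-correlation between two equally sized patches.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-8
        return float((a * b).sum() / denom)

    return ncc(template, bg_patch) > ncc(template, fg_patch)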