2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00298
Online Joint Multi-Metric Adaptation From Frequent Sharing-Subset Mining for Person Re-Identification

Cited by 40 publications (13 citation statements); References 27 publications.
“…In Table I, we compare with 21 representative methods of weight pretraining and finetuning, which are also the current best models on Market-1501, including MSCAN [63], DF [64], SSM [65], SVDNet [66], GAN [67], PDF [68], TriNet [69], TriNet + Era. + Re-ranking [1], PCB [83], Omin [70], JointDG [71], IANet [72], CASN+PCB [73], CAMA [74], MHN-6 [75], AANet [76], P2-Net [77], PGFA [78], ISP [79], CBN [80], SNR [81], and M3+ResNet50 [82]. All settings of the above methods are consistent with the common training settings.…”
Section: A Comparison With Weight Pretraining and Finetuning In Perso...mentioning
confidence: 60%
“…For the fourth query image, we can see that the current dataset contains a large number of similar pedestrian images (ranks 1-10). Even to the human eye it is impossible to distinguish whether these images belong to the same person, but our method can still obtain relatively good results.

Method (venue)                 | Backbone    | Rank-1 | mAP
[53]                           | ResNet152   | 85.9   | 73.3
PCB+RPP (ECCV18) [38]          | ResNet50    | 83.3   | 69.2
DuATM (CVPR18) [37]            | DenseNet121 | 81.8   | 64.6
PSE+ECN (CVPR18) [36]          | ResNet50    | 84.5   | 75.7
AANet (CVPR19) [17]            | ResNet152   | 87.7   | 74.3
DCDS (ICCV19) [46]             | ResNet101   | 87.6   | 75.5
CASN (CVPR19) [55]             | ResNet50    | 87.7   | 73.7
HPM (AAAI19) [66]              | ResNet50    | 86.6   | 74.3
MHN-PCB (ICCV19) [45]          | ResNet50    | 89.1   | 77.2
OSNet (ICCV19) [48]            | OSNet       | 88.6   | 73.5
MGN (ACMMM18) [42]             | ResNet50    | 88.7   | 78.4
ABDNet (ICCV19) [56]           | ResNet50    | 89.0   | 78.6
GCP (AAAI20) [47]              | ResNet50    | 89.7   | 78.6
SAN (AAAI20) [49]              | ResNet50    | 87.9   | 75.5
3DTANet (TCSVT20) [50]         | -           | 89.9   | 78.4
M3+ResNet50 (CVPR20) [54]      | ResNet50    | 84.7   | 68.5
M3+DenseNet121 (CVPR20) [54]   | DenseNet121 | 84.9   | 68.0
HOReID (CVPR20) [51]           | ResNet50    | 86.9   | 75.6
Ours                           | ResNet50    | 90.2   | 80.2…”
Section: Methodsmentioning
confidence: 99%
“…Evaluation Metrics: For the ReID task, we follow the same evaluation protocols used in previous works [27,52,55]. Mean Average Precision (mAP) and the cumulative matching characteristic at Rank-1 (CMC@1) are employed to evaluate the performance of PGVR.…”
Section: Datasets and Evaluation Metricsmentioning
confidence: 99%
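For reference, the two metrics named in the excerpt above can be computed from a query-by-gallery distance matrix. The following is a minimal sketch, not the cited authors' code; the function name and the omission of Market-1501's same-camera filtering are simplifying assumptions.

```python
import numpy as np

def evaluate_reid(dist, q_ids, g_ids):
    """Compute CMC@1 and mAP from pairwise distances.

    dist:  (num_query, num_gallery) distance matrix
    q_ids: identity label per query image
    g_ids: identity label per gallery image
    Note: same-camera gallery filtering (used in the standard
    Market-1501 protocol) is omitted here for brevity.
    """
    cmc1_hits, aps = [], []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])           # gallery ranked by distance
        matches = g_ids[order] == q_ids[i]    # relevance flag at each rank
        if not matches.any():
            continue                          # query with no gallery match
        cmc1_hits.append(matches[0])          # CMC@1: correct at rank 1?
        # Average precision: mean of precision at each relevant rank
        hit_ranks = np.where(matches)[0]
        precisions = (np.arange(len(hit_ranks)) + 1) / (hit_ranks + 1)
        aps.append(precisions.mean())
    return float(np.mean(cmc1_hits)), float(np.mean(aps))
```

In this convention a rank-1 hit contributes precision 1.0 to AP, and mAP averages the per-query AP over all queries with at least one gallery match.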