Exploiting Local Shape Information for Cross-Modal Person Re-identification (2019)
DOI: 10.1007/978-3-030-26766-7_8

Cited by 4 publications (7 citation statements)
References 25 publications
“…For both datasets, integrating cross-modal attention improves the recognition accuracy by a significant margin in all measures. With cross-modal attention included, we outperform the reported rank-1 accuracy of the local shape method (Uddin et al., 2019). Since Uddin et al. (2019) do not report the instance split used and do not perform validation (i.e., they use 40 instead of 32 training instances as we do), the improved rank-1 performance of our method indicates the superiority of our approach.…”
Section: Experimental Analysis with Cross-Modal Attention
Confidence: 82%
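
The statement above reports results for cross-modal attention without specifying its form. As a rough illustration only, below is a minimal sketch of one common formulation, in which features from one modality query features from the other. The class name, dimensions, and the residual connection are assumptions for illustration, not details taken from the cited papers.

```python
# Minimal sketch of cross-modal attention between two modality feature maps
# (e.g., RGB and depth). All names and design choices here are illustrative.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Attend from one modality's tokens (query) to the other's (key/value)."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x_rgb: torch.Tensor, x_depth: torch.Tensor) -> torch.Tensor:
        # x_rgb, x_depth: (batch, n_tokens, dim)
        attn = (self.q(x_rgb) @ self.k(x_depth).transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        # Residual connection keeps the original query-modality information.
        return x_rgb + attn @ self.v(x_depth)

# Example: fuse 8 spatial tokens of 256-d features from each modality.
rgb = torch.randn(2, 8, 256)
depth = torch.randn(2, 8, 256)
fused = CrossModalAttention(256)(rgb, depth)
print(fused.shape)  # torch.Size([2, 8, 256])
```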
“…The same features are extracted for both modalities and compared on the basis of Euclidean distance and the additional metric-learning step Cross-view Quadratic Discriminant Analysis (XQDA). Additionally, the matching of Eigen-depth and HOG/SLTP features as reported by Zhuo et al. (2017) and the local shape method (Uddin et al., 2019) are included in Table 3 for the BIWI dataset. Table 3 presents the metrics of the state-of-the-art and proposed networks for different scenarios on the BIWI dataset, and Table 4 shows the results for the RobotPKU dataset.…”
Section: Comparison with State-of-the-Art Methods
Confidence: 99%
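
The matching protocol quoted above (shared features compared by Euclidean distance, with XQDA as an additional learned metric) can be illustrated with a short sketch. This computes plain Euclidean-distance rank-1 accuracy only; XQDA itself, which replaces the Euclidean metric with a learned Mahalanobis-style distance, is omitted. Function and array names, and the toy data, are assumptions for illustration.

```python
# Minimal sketch: cross-modal matching by Euclidean distance.
# Rank-1 accuracy = fraction of probes whose nearest gallery entry
# shares their identity. XQDA would substitute a learned metric here.
import numpy as np

def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    # Pairwise Euclidean distances, shape (n_probe, n_gallery).
    dists = np.linalg.norm(
        probe_feats[:, None, :] - gallery_feats[None, :, :], axis=-1
    )
    nearest = gallery_ids[np.argmin(dists, axis=1)]
    return float(np.mean(nearest == probe_ids))

# Toy example: 4 probes matched against a 6-entry gallery of 128-d features.
rng = np.random.default_rng(0)
gallery_feats = rng.normal(size=(6, 128))
gallery_ids = np.arange(6)
# Probes are lightly perturbed copies of the first four gallery entries.
probe_feats = gallery_feats[:4] + 0.05 * rng.normal(size=(4, 128))
probe_ids = np.arange(4)
print(rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids))  # 1.0
```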
“…In the Re-ID literature, publicly available RGB–D and IR sensor-based datasets [4, 14, 33, 35, 39, 40, 41, 42, 43, 44, 45, 46] have been recorded for specific purposes: some Re-ID approaches combine different domains to improve Re-ID performance [33, 34, 35, 36, 37, 38, 39, 47, 48], while other approaches use cross-domain methods [49, 50, 51, 52, 53] to maintain surveillance across daytime visible images and nighttime infrared images. Still, some Re-ID methods consider only IR–IR matching [17, 54] using the publicly available datasets.…”
Section: Introduction
Confidence: 99%