2024
DOI: 10.1109/tnnls.2021.3085978
Global-Local Multiple Granularity Learning for Cross-Modality Visible-Infrared Person Reidentification

Cited by 44 publications (27 citation statements) · References 0 publications
“…Regarding feature alignment, there are many approaches [5, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]. The most popular architecture [5, 35, 48] is a double-stream deep network, in which the shallow layers are independent, to learn modality-specific features, and the deep layers are shared, to learn modality-common features.…”
Section: Related Work
confidence: 99%
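The double-stream idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual model: the layer sizes, single-matrix "layers", and ReLU activations are all assumptions chosen for brevity; the only point is that each modality gets its own shallow weights while the deep weights are shared.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions, for illustration only.
D_IN, D_HID, D_OUT = 128, 64, 32

# Modality-specific shallow stage: separate weights per modality.
W_vis = rng.standard_normal((D_IN, D_HID)) * 0.1   # visible stream
W_ir  = rng.standard_normal((D_IN, D_HID)) * 0.1   # infrared stream

# Modality-common deep stage: one set of weights serves both streams.
W_shared = rng.standard_normal((D_HID, D_OUT)) * 0.1

def embed(x, modality):
    """Map an input feature to the shared embedding space."""
    W_specific = W_vis if modality == "visible" else W_ir
    h = relu(x @ W_specific)      # modality-specific shallow layers
    return relu(h @ W_shared)     # shared deep layers

x_vis = rng.standard_normal(D_IN)
x_ir = rng.standard_normal(D_IN)
z_vis = embed(x_vis, "visible")
z_ir = embed(x_ir, "infrared")
print(z_vis.shape, z_ir.shape)  # → (32,) (32,)
```

Both modalities end up in the same 32-dimensional space, which is what makes cross-modality matching by distance comparison possible.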
“…To select useful features, Wei et al. [31] designed a flexible body-partition module that distinguishes part representations automatically. Zhang et al. [32] concatenated the global feature and the local features to build a more powerful feature descriptor. In [33], to eliminate the interference of background information, the authors exploited knowledge of human body parts to extract robust features.…”
Section: Milestones of Existing VI-ReID Studies
confidence: 99%
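The global-plus-local concatenation mentioned for [32] can be illustrated with a small sketch. The feature-map shape, stripe count, and average pooling here are assumptions (horizontal stripes are a common part-based scheme in person re-ID), not the cited method's exact design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical backbone output: (channels, height, width).
C, H, W = 256, 12, 4
feat_map = rng.standard_normal((C, H, W))

# Global branch: average-pool over the whole spatial extent.
global_feat = feat_map.mean(axis=(1, 2))              # shape (256,)

# Local branch: split the height axis into horizontal stripes
# and pool each stripe separately to get part-level features.
N_PARTS = 3
stripes = np.array_split(feat_map, N_PARTS, axis=1)
local_feats = [s.mean(axis=(1, 2)) for s in stripes]  # 3 × (256,)

# Final descriptor: global and local features concatenated.
descriptor = np.concatenate([global_feat] + local_feats)
print(descriptor.shape)  # → (1024,)
```

The concatenated descriptor carries both holistic appearance (global branch) and finer part-level cues (local branches), at the cost of a larger embedding dimension.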
“…We can see that our AGMNet sets a new state of the art on SYSU-MM01, achieving 69.63% Rank-1 accuracy, 66.11% mAP, and 52.24% mINP under the all-search mode, and 74.68% Rank-1 accuracy, 78.30% mAP, and 74.00% mINP under the indoor-search mode. Although some methods (FBP-AL [33], GLMC [45], and HTL [17]) introduce part-based convolutional features to improve retrieval performance, AGMNet still shows a meaningful gain in Rank-1/mAP/mINP (69.63% vs. 64.37%, 66.11% vs. 63.43%, and 52.24% vs. 39.54%).…”
Section: E. Comparison to the State-of-the-Art
confidence: 99%
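The Rank-1 and mAP figures quoted above follow the standard re-ID evaluation recipe: rank the gallery by distance to each query, check whether the nearest item shares the query identity (Rank-1), and average the per-query average precision (mAP). A minimal sketch of that computation, on a toy 2-query example (this is the generic metric, not the paper's evaluation code, and it omits the camera-filtering rules real benchmarks apply):

```python
import numpy as np

def rank1_and_map(dist, query_ids, gallery_ids):
    """Rank-1 accuracy and mean average precision from a distance matrix.

    dist[i, j] is the distance between query i and gallery item j.
    Every query is assumed to have at least one gallery match.
    """
    rank1_hits, aps = [], []
    for i, qid in enumerate(query_ids):
        order = np.argsort(dist[i])               # nearest gallery first
        matches = gallery_ids[order] == qid       # boolean hit mask
        rank1_hits.append(matches[0])             # top-1 correct?
        # Average precision over the ranks where matches occur.
        hit_ranks = np.flatnonzero(matches)
        precisions = (np.arange(len(hit_ranks)) + 1) / (hit_ranks + 1)
        aps.append(precisions.mean())
    return float(np.mean(rank1_hits)), float(np.mean(aps))

# Toy example: 2 queries, 3 gallery items.
dist = np.array([[0.1, 0.9, 0.5],
                 [0.8, 0.4, 0.2]])
query_ids = np.array([0, 1])
gallery_ids = np.array([0, 1, 0])
r1, mAP = rank1_and_map(dist, query_ids, gallery_ids)
print(r1, mAP)  # → 0.5 0.75
```

Query 0 retrieves its match first (AP = 1.0); query 1's match appears at rank 2 (AP = 0.5), so Rank-1 is 0.5 and mAP is 0.75.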