2021
DOI: 10.48550/arxiv.2103.15537
Preprint
Cloth-Changing Person Re-identification from A Single Image with Gait Prediction and Regularization

Abstract: Cloth-Changing person re-identification (CC-ReID) aims at matching the same person across different locations over a long duration, e.g., over days, and therefore inevitably faces the challenge of clothing changes. In this paper, we focus on handling the CC-ReID problem well under a more challenging setting, i.e., from just a single image, which enables high-efficiency and latency-free pedestrian identification for real-time surveillance applications. Specifically, we introduce gait recognition as an auxiliary task to …
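The abstract's idea of using gait prediction as an auxiliary task can be sketched as a two-term training objective: an identity-classification loss plus a regularization term that pushes the image features to predict a gait representation. The function names, the MSE form of the gait term, and the weight `lam` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multitask_loss(id_logits, id_label, gait_pred, gait_target, lam=0.5):
    """Sketch of a joint CC-ReID objective: identity loss + gait regularizer."""
    # Cross-entropy over identity classes (numerically stable softmax).
    shifted = id_logits - id_logits.max()
    probs = np.exp(shifted) / np.exp(shifted).sum()
    id_loss = -np.log(probs[id_label])
    # Auxiliary term: MSE between the gait representation predicted from the
    # single RGB image and a target gait feature (e.g. from a gait network).
    gait_loss = np.mean((gait_pred - gait_target) ** 2)
    return id_loss + lam * gait_loss
```

When the predicted gait feature matches the target, the objective reduces to the plain identity loss; a mismatched gait prediction raises the loss, which is the regularizing effect the abstract alludes to.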

Cited by 1 publication (2 citation statements)
References 72 publications (145 reference statements)
“…HACNN [31], PCB [40], and IANet [20]) and six clothes-changing re-id methods (i.e. SPT+ASE [49], GI-ReID [28], CESD [35], RCSANet [25], 3DSL [6], and FSAM [18]) on LTCC and PRCC in Tab. 2.…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…To this end, [52,55] attempt to use disentangled representation learning to decouple appearance and structural information from RGB images, treating structural information as clothes-irrelevant features. In contrast, other researchers use multi-modality information (e.g., skeletons [35], silhouettes [18,28], radio signals [7], contour sketches [49], or 3D shape [6]) to model body shape and extract clothes-irrelevant features. However, training disentangled representation learning is time-consuming, and multi-modality-based methods need additional models or equipment to extract the multi-modality information.…”
Section: Related Work
confidence: 99%