2021
DOI: 10.48550/arxiv.2111.12084
Preprint

Self-Supervised Pre-Training for Transformer-Based Person Re-Identification

Abstract: Transformer-based supervised pre-training achieves great performance in person re-identification (ReID). However, due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset (e.g., ImageNet-21K) to boost the performance, owing to the transformer's strong data-fitting ability. To address this challenge, this work aims to mitigate the gap between the pre-training and ReID datasets from the perspectives of data and model structure, respectively. We first investi…

Cited by 7 publications (6 citation statements)
References 39 publications
“…Compared with the currently best-published method, TransReID-SSL [51], the proposed method surpassed it by 3.9% on Market-1501 and 9.3% on DukeMTMC-reID in mAP.…”
Section: Comparison With State-of-the-art Methods
confidence: 81%
“…TransReID (He et al 2021) is the first pure transformer-based method for re-ID; it proposes a jigsaw patch module (JPM) which shuffles patch embeddings and re-groups them for further feature learning, extracting several local features and aggregating them to obtain a robust feature with global context. TransReID-SSL (Luo et al 2021) uses a massive person re-ID dataset, LUPerson (Fu et al 2021), to train a stronger pre-trained model with DINO (Caron et al 2021).…”
Section: Representation Learning In Re-ID
confidence: 99%
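The JPM operation quoted above is concrete enough to sketch. Below is a minimal, hypothetical PyTorch illustration of jigsaw-style patch regrouping; the shift and group-count values are placeholders, and mean pooling stands in for the shared transformer layer TransReID actually applies to each group before computing its local ReID losses.

```python
import torch

def jigsaw_patch_module(patch_tokens, num_groups=4, shift=5):
    """Hypothetical sketch of jigsaw-style patch regrouping.

    patch_tokens: (B, N, D) ViT patch embeddings (without the [CLS] token).
    The sequence is shifted, shuffled by a random permutation, and split
    into groups; each group yields one local feature.
    """
    B, N, D = patch_tokens.shape
    # Shift the patch sequence, then shuffle it with a random permutation.
    shifted = torch.roll(patch_tokens, shifts=-shift, dims=1)
    shuffled = shifted[:, torch.randperm(N), :]
    # Re-group the shuffled patches into num_groups local sets.
    groups = shuffled.chunk(num_groups, dim=1)
    # Mean pooling is a stand-in here for the shared transformer layer
    # that produces one local feature per group in TransReID.
    return [g.mean(dim=1) for g in groups]  # num_groups tensors of (B, D)

# Example: 128 patch tokens of dimension 768 from a ViT backbone.
local_feats = jigsaw_patch_module(torch.randn(2, 128, 768))
print(len(local_feats), local_feats[0].shape)  # 4, torch.Size([2, 768])
```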
“…Based on ViT [41], [39] applies a pure Transformer to supervised ReID for the first time, introducing side information to improve the robustness of features. [47] further proposes self-supervised pre-training for Transformer-based person ReID, which mitigates the gap between the pre-training and ReID datasets from the perspectives of data and model structure.…”
Section: Transformer-related Person ReID
confidence: 99%
“…Vision Transformers usually yield better generalization ability than common CNN networks under distribution shift [49]. However, existing pure transformer-based ReID models are only used in supervised and pre-trained ReID [47,39]. The generalization of Transformers in domain-generalized (DG) ReID is still unknown.…”
Section: Introduction
confidence: 99%