2020
DOI: 10.48550/arxiv.2002.04414
Preprint
Diversity-Achieving Slow-DropBlock Network for Person Re-Identification

Abstract: A big challenge of person re-identification (Re-ID) using a multi-branch network architecture is to learn diverse features from the ID-labeled dataset. The 2-branch Batch DropBlock (BDB) network was recently proposed for achieving diversity between the global branch and the feature-dropping branch. In this paper, we propose to move the dropping operation from the intermediate feature layer towards the input (image dropping). Since it may drop a large portion of input images, this makes the training hard to con…
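The abstract's image-dropping idea — erasing the same randomly placed block from every image in a batch, rather than from an intermediate feature map — can be sketched as below. The function name, drop ratios, and NumPy implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def batch_drop_block(images, drop_h_ratio=0.3, drop_w_ratio=1.0, rng=None):
    """Zero out the same randomly placed rectangular block in every
    image of a batch (a NumPy sketch of batch-wise input dropping).

    images: array of shape (N, C, H, W).
    drop_h_ratio / drop_w_ratio: fraction of height / width to erase.
    """
    rng = np.random.default_rng(rng)
    n, c, h, w = images.shape
    dh = max(1, int(round(h * drop_h_ratio)))
    dw = max(1, int(round(w * drop_w_ratio)))
    # One block position shared by the whole batch.
    top = rng.integers(0, h - dh + 1)
    left = rng.integers(0, w - dw + 1)
    out = images.copy()
    out[:, :, top:top + dh, left:left + dw] = 0.0
    return out

batch = np.ones((4, 3, 8, 8))
dropped = batch_drop_block(batch, drop_h_ratio=0.25, drop_w_ratio=1.0, rng=0)
```

With `drop_w_ratio=1.0` the erased block is a full-width horizontal stripe, which for pedestrian images hides an entire body region — this is what makes aggressive input dropping destabilize training, motivating the paper's "slow" schedule.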

Cited by 2 publications (2 citation statements)
References 48 publications
“…Batch DropBlock Network (BDB) [5] uses a global branch and a feature-dropping branch to keep the globally salient representations and reinforce attentive feature learning of local regions. Wu [34] builds on BDB with multiple dropping branches to further boost performance.…”
Section: DropBlock
confidence: 99%
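The two-branch BDB design described above can be sketched as a simple pooling head over a conv feature map: a global branch keeps the holistic representation, while a dropping branch erases a horizontal stripe before pooling so it must attend to the remaining local regions. Branch choices (average vs. max pooling, the drop ratio, concatenation) are illustrative assumptions, not the exact BDB configuration.

```python
import numpy as np

def two_branch_features(feat_map, drop_h_ratio=0.3, rng=None):
    """Sketch of a BDB-style two-branch head on a conv feature map
    of shape (C, H, W): a global branch plus a feature-dropping branch.
    """
    rng = np.random.default_rng(rng)
    c, h, w = feat_map.shape
    # Global branch: average-pool the whole map for the holistic feature.
    global_feat = feat_map.mean(axis=(1, 2))
    # Dropping branch: erase a random horizontal stripe, then max-pool,
    # forcing this branch to respond to the surviving local regions.
    dh = max(1, int(round(h * drop_h_ratio)))
    top = rng.integers(0, h - dh + 1)
    dropped = feat_map.copy()
    dropped[:, top:top + dh, :] = 0.0
    drop_feat = dropped.max(axis=(1, 2))
    # The final embedding concatenates both branches.
    return np.concatenate([global_feat, drop_feat])

fm = np.random.default_rng(1).random((16, 12, 4))
emb = two_branch_features(fm, rng=2)
```

Multi-branch variants such as Wu [34] repeat the dropping branch several times with independent block positions and concatenate all the resulting features.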
“…Person Re-ID has made remarkable achievements in the training and testing of single-domain data sets [8,9]. In the cross-domain case (training on a source domain, testing on a target domain), however, the model's performance is greatly compromised if a pre-trained model is applied directly to a new data set.…”
Section: Introduction
confidence: 99%