2020
DOI: 10.1109/access.2020.3024120

An Efficient Person Re-Identification Model Based on New Regularization Technique

Abstract: The aim of person re-identification (ReID) is to recognize the same person across different scenes. Because many demanding applications operate on large-scale data, increasing attention has been devoted to matching efficiency as well as accuracy. Many methods based on binary coding have been proposed to achieve efficient ReID. These methods learn projections that map the high-dimensional features of deep neural networks into compact binary codes through the simple insertion of an extra fully connected layer…
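To make the abstract's binary-coding idea concrete, here is a minimal sketch, not the paper's actual model, of projecting high-dimensional backbone features to compact binary codes through one extra fully connected layer; the feature dimension, code length, and class name `BinaryCodeHead` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BinaryCodeHead(nn.Module):
    """Extra fully connected layer that projects features to short binary codes."""

    def __init__(self, feature_dim: int = 2048, code_bits: int = 128):
        super().__init__()
        # The learned projection from high-dimensional features to code logits.
        self.projection = nn.Linear(feature_dim, code_bits)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Relaxed (differentiable) codes used during training.
        return torch.tanh(self.projection(features))

    @torch.no_grad()
    def encode(self, features: torch.Tensor) -> torch.Tensor:
        # Hard binarization for efficient matching (e.g., Hamming distance).
        return torch.sign(self.projection(features))

# Example: project a batch of 2048-d backbone features to 128-bit codes.
feats = torch.randn(4, 2048)
head = BinaryCodeHead()
codes = head.encode(feats)
print(codes.shape)  # torch.Size([4, 128])
```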

Cited by 5 publications (3 citation statements)
References 47 publications
“…This approach prevents the learning model from using a wide range of weight space by multiplying the standard deviation of the weight matrix by its parameter to obtain its regularization term. The standard deviation measures the spread of the weight values within the matrix and is computed by taking the square root of the variance of the weights [31]–[35]. …”
Section: Methods
confidence: 99%
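The statement above describes a regularization term built from the standard deviation of the weight matrix. The following is a hedged sketch under that description only: the penalty is lambda times the square root of the variance of the weights, added to the task loss; the coefficient name `reg_lambda` and the single-layer setup are assumptions, not the cited paper's exact formulation.

```python
import torch
import torch.nn as nn

def std_regularizer(weight: torch.Tensor, reg_lambda: float = 1e-3) -> torch.Tensor:
    # Standard deviation = square root of the variance of the weight values;
    # scaling it by reg_lambda discourages widely spread weights.
    return reg_lambda * torch.sqrt(torch.var(weight))

layer = nn.Linear(256, 128)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 256)
targets = torch.randint(0, 128, (8,))

task_loss = criterion(layer(x), targets)
loss = task_loss + std_regularizer(layer.weight)  # task loss + std-based penalty
loss.backward()
```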
“…Thus, the regularizer prevents the learning model from taking values from the weight space that are too widely distributed. In fact, the new regularizer has been extensively tested on various tasks with different datasets and proved to be more effective than other regularization methods [57], [76]–[80].…”
Section: Proposed Model
confidence: 99%
“…Inspired by this, we advocate the use of Wasserstein loss to minimize the Wasserstein distance between the data distribution of low-coupling binary codes and the feature distribution of the derived binary local descriptors. Although the Wasserstein loss has been successfully employed in applications like person re-identification [37], [38], it has never been employed to learn binary local descriptors yet.…”
Section: Low-Coupling Binary Local Descriptors
confidence: 99%
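As a rough illustration of the Wasserstein loss mentioned in this statement, here is a minimal sketch that uses the closed-form one-dimensional Wasserstein-1 distance (sort the samples, average the absolute differences) applied per code dimension; the cited works may instead use a critic network or Sinkhorn iterations, so treat this only as an assumed estimator.

```python
import torch

def wasserstein_1d_loss(codes_a: torch.Tensor, codes_b: torch.Tensor) -> torch.Tensor:
    # codes_a, codes_b: (N, D) batches assumed to have the same batch size N.
    # For 1-D empirical distributions, W1 equals the mean absolute difference
    # of the sorted samples; here it is averaged over the D code dimensions.
    sorted_a, _ = torch.sort(codes_a, dim=0)
    sorted_b, _ = torch.sort(codes_b, dim=0)
    return torch.mean(torch.abs(sorted_a - sorted_b))

# Example: align the distribution of relaxed binary codes with target descriptors.
source = torch.randn(32, 128, requires_grad=True)
relaxed_codes = torch.tanh(source)                # relaxed (trainable) codes
target_codes = torch.sign(torch.randn(32, 128))   # fixed binary descriptors
loss = wasserstein_1d_loss(relaxed_codes, target_codes)
loss.backward()
```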