2020
DOI: 10.1016/j.patrec.2019.11.032
Scale-invariant batch-adaptive residual learning for person re-identification

Cited by 11 publications (3 citation statements); references 5 publications.
“…This is possible with the help of supervised distance learning and the Siamese convolutional network architecture [56]. Owing to its simplicity, straightforward design and low complexity, this model has gained attention in the scientific community and has been successfully adopted for weakly supervised metric learning [38], signature verification [57], person re-identification [58] and face verification [34], where the number of labelled instances per class in a dataset is not enough to train a traditional CNN classifier. Because few training samples are available and both pixel-wise and channel-wise correlations exist, the network is forced to relearn redundant information during the back-propagation stage, hampering the effective learning of the deep model.…”
Section: Proposed Sicodef 2 Net Model (mentioning)
confidence: 99%
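The shared-weight distance learning described in the excerpt above can be sketched minimally. This is a hypothetical numpy illustration of the Siamese idea (one set of weights embeds both inputs, and the metric is the distance between embeddings), not the cited architecture:

```python
import numpy as np

def embed(x, W):
    # Shared-weight embedding: the same linear map plays the role of
    # the shared CNN branch in a Siamese network.
    return np.tanh(W @ x)

def siamese_distance(x1, x2, W):
    # Both inputs pass through identical weights; the learned metric
    # is the Euclidean distance between their embeddings.
    return np.linalg.norm(embed(x1, W) - embed(x2, W))

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
a = rng.standard_normal(8)
d_same = siamese_distance(a, a, W)                      # identical inputs
d_diff = siamese_distance(a, rng.standard_normal(8), W)  # different inputs
```

Because the weights are shared, identical inputs map to identical embeddings (distance 0), which is why so few labelled instances per class suffice to train a usable metric.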
“…In [30,54] it is stated that residual networks are easy to optimize. However, residual networks have fewer parameters than other deep learning methods [53].…”
Section: Model-4: Residual Network (mentioning)
confidence: 99%
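The identity skip connection that makes residual networks easy to optimize can be shown with a minimal numpy sketch (a toy illustration, not the cited model):

```python
import numpy as np

def residual_block(x, W1, W2):
    # y = x + F(x): the block only has to learn the residual F, and the
    # skip connection gives gradients an unimpeded path through the block.
    h = np.maximum(0.0, W1 @ x)  # ReLU
    return x + W2 @ h

# With zero weights, F(x) = 0 and the block reduces exactly to the
# identity mapping -- the property that eases optimization of deep stacks.
x = np.arange(4.0)
y = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
```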
“…Secondly, they identified the optimal metric combination, including the cosine distance metric, further reducing intra-class differences and thus addressing lighting and viewpoint-change issues. On the other hand, Sikdar et al. [10] transformed inputs into pyramids of different resolutions, fed the multiple differently scaled feature maps into the network for learning, and finally resized them back to the original size before performing max-pooling operations. This processing method has been demonstrated to yield scale-invariant results for re-identification systems.…”
Section: Introduction (mentioning)
confidence: 99%
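The pyramid-then-max-pool idea attributed to Sikdar et al. can be sketched roughly as follows. This is a toy numpy illustration; the scale factors, average-pool downscaling and nearest-neighbour upscaling are assumptions for the sketch, not the paper's exact pipeline:

```python
import numpy as np

def downscale(img, factor):
    # Naive average-pool downscaling (assumes dimensions divisible by factor).
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale(img, factor):
    # Nearest-neighbour upscaling back to the original resolution.
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def scale_invariant_map(img, factors=(1, 2, 4)):
    # Build a resolution pyramid, bring every level back to the original
    # size, then take an element-wise max across scales -- the max-pooling
    # over scales is what yields the scale-invariant response.
    maps = [upscale(downscale(img, f), f) for f in factors]
    return np.maximum.reduce(maps)

img = np.arange(16.0).reshape(4, 4)
out = scale_invariant_map(img)
```

Because factor 1 leaves the input untouched, the max across scales can never fall below the original response; coarser levels only add context from lower resolutions.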