2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR)
DOI: 10.1109/acpr.2015.7486476
Multi-attribute learning for pedestrian attribute recognition in surveillance scenarios

Cited by 236 publications (160 citation statements)
References 13 publications

“…Sudowe et al [28] propose a holistic CNN model to jointly learn different attributes. Li et al [13] formulate pedestrian attribute recognition as a multi-label classification problem and propose an improved cross-entropy loss function. However, the performance of these holistic methods is limited due to the lack of consideration of the prior information in attributes.…”
Section: Related Work
confidence: 99%
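The "improved cross-entropy loss" referenced in this excerpt is, in essence, a weighted sigmoid cross-entropy over all attributes, where each attribute's loss is rescaled by how imbalanced that attribute is in the training set. Below is a minimal PyTorch sketch of that idea; the exponential weighting scheme, the function name, and the argument names are assumptions of this sketch and not necessarily the exact formulation used by Li et al. [13].

```python
import torch
import torch.nn.functional as F

def weighted_multilabel_bce(logits, targets, pos_ratio, sigma=1.0):
    """Weighted sigmoid cross-entropy over all attributes.

    logits:    (batch, num_attrs) raw scores from the network
    targets:   (batch, num_attrs) float binary labels in {0, 1}
    pos_ratio: (num_attrs,) fraction of positive examples per attribute,
               estimated on the training set
    """
    # Up-weight the rarer outcome of each attribute; this exponential
    # scheme is one common choice, not necessarily the paper's exact form.
    w_pos = torch.exp((1.0 - pos_ratio) / sigma ** 2)
    w_neg = torch.exp(pos_ratio / sigma ** 2)
    weights = targets * w_pos + (1.0 - targets) * w_neg

    return F.binary_cross_entropy_with_logits(
        logits, targets, weight=weights, reduction="mean"
    )
```

In practice, `pos_ratio` is computed once from the training labels, so rare attributes (e.g. carrying a backpack) contribute more to the loss than very common ones.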
“…After verifying the effectiveness of the proposed method, we compared the proposed approach with several state-of-the-art approaches, e.g. DeepMAR [Li et al, 2015], HP-net [Liu et al, 2017b], SR C-RNN [Liu et al, 2017a], VAA [Sarafianos et al, 2018], and GRL [Zhao et al, 2018]. Results of the compared models are mostly reported directly from their papers.…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…To be specific, for N-way K-shot incremental few-shot learning, we first sample N attribute groups (see [Zhao et al, 2018]), then randomly choose one attribute from each of the selected N groups to form A_novel, and treat the remaining attributes as A_base. We follow [Li et al, 2015] and [Li et al, 2016], and divide PETA and RAP into train/val/test sets with 5 random partitions. All reported results are averaged over these 5 partitions.…”
Section: Data Preparation
confidence: 99%
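The group-based split described in this excerpt can be sketched as follows, assuming attributes come pre-organized into groups in the style of Zhao et al. [2018]; the function name and the list-of-lists group representation are hypothetical choices of this sketch.

```python
import random

def split_novel_base(attribute_groups, n_way, seed=0):
    """Sample n_way groups, take one attribute from each as the novel set,
    and treat every remaining attribute as the base set.

    attribute_groups: list of lists, each inner list holding related
    attribute names (hypothetical representation).
    """
    rng = random.Random(seed)
    chosen = set(rng.sample(range(len(attribute_groups)), n_way))

    novel, base = [], []
    for g, attrs in enumerate(attribute_groups):
        if g in chosen:
            pick = rng.choice(attrs)        # one novel attribute per group
            novel.append(pick)
            base.extend(a for a in attrs if a != pick)
        else:
            base.extend(attrs)
    return novel, base
```

Repeating this with five different seeds mirrors the five random train/val/test partitions averaged in the excerpt.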
“…Base Network. Since we focus on a learning algorithm that is agnostic to the specific model, we adopt DeepMAR [Li et al, 2015] with a ResNet50 backbone as our base model M_base for its competitive performance and simplicity. Hyperparameters.…”
Section: Implementation Details
confidence: 99%
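A minimal PyTorch sketch of such a base model follows, assuming a torchvision ResNet50 backbone feeding a single multi-label linear head; the class name and exact head layout are assumptions of this sketch, not the authors' released implementation.

```python
import torch.nn as nn
from torchvision import models

class AttributeBaseline(nn.Module):
    """ResNet50 backbone with a single multi-label linear head."""

    def __init__(self, num_attrs):
        super().__init__()
        # Pretrained ImageNet weights (torchvision >= 0.13 API).
        backbone = models.resnet50(weights="IMAGENET1K_V1")
        in_features = backbone.fc.in_features   # 2048 for ResNet50
        backbone.fc = nn.Identity()              # keep pooled features only
        self.backbone = backbone
        self.head = nn.Linear(in_features, num_attrs)

    def forward(self, x):
        # Raw logits; pair with a sigmoid / weighted BCE loss at train time.
        return self.head(self.backbone(x))
```

One logit per attribute keeps the model simple and lets the same backbone serve as M_base for different attribute splits.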