2019
DOI: 10.1109/tmm.2018.2870522
GLAD: Global–Local-Alignment Descriptor for Scalable Person Re-Identification

Cited by 282 publications (288 citation statements)
References 61 publications
“…As shown in Table IV, we first compare our method with related works on the Market-1501 and DukeMTMC-reID datasets. Some approaches that try to remove the influence of the background are included, such as the human-landmark-detection method GLAD [2], the segmentation method SPReID [5], and the attention-based method HA-CNN [8]. Our approach achieves 95.0% rank-1 accuracy and 84.6% mAP on the Market-1501 dataset, and 88.7% rank-1 accuracy and 77.0% mAP on the DukeMTMC-reID dataset.…”
Section: E. Comparison With the State-of-the-Art Methods
Citation type: mentioning (confidence: 99%)
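The figures quoted above are rank-1 accuracy and mAP, the two standard person re-identification evaluation metrics. For context, below is a minimal NumPy sketch of how these metrics are typically computed from a query-gallery distance matrix; the function name, array layout, and the omission of same-camera filtering (used on Market-1501 and DukeMTMC-reID) are simplifying assumptions, not the evaluation code of any cited paper.

import numpy as np

def evaluate_rank1_map(dist, q_ids, g_ids):
    """Compute rank-1 accuracy and mAP from a query-gallery distance matrix.

    dist  : (num_query, num_gallery) pairwise distances
    q_ids : (num_query,) identity labels of the query images
    g_ids : (num_gallery,) identity labels of the gallery images
    Simplified sketch: same-camera filtering is omitted.
    """
    rank1_hits, aps, valid = 0, [], 0
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])            # gallery ranked by distance
        matches = g_ids[order] == q_ids[i]     # relevance of each ranked item
        if not matches.any():
            continue                           # query has no true match in gallery
        valid += 1
        rank1_hits += int(matches[0])          # correct at rank 1?
        hit_ranks = np.where(matches)[0] + 1   # 1-based ranks of the true matches
        precisions = np.arange(1, len(hit_ranks) + 1) / hit_ranks
        aps.append(precisions.mean())          # average precision for this query
    return rank1_hits / valid, float(np.mean(aps))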
“…By minimizing L, the proposed approach learns the foreground and background feature representations simultaneously. Unlike existing works [1], [2], [3], [5], [7], [6], the prediction of the background and the training of the person re-identification model are not separate. The addition of the target enhancement module and the target attention loss couples the two branches so that they promote each other, allowing our model to obtain a more accurate separation of the foreground and background.…”
Section: The Overall Training Objective
Citation type: mentioning (confidence: 96%)
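The statement above describes a single objective L in which background prediction and re-identification training are coupled rather than run as separate stages (as in GLAD [2] or SPReID [5]). The exact terms of L are not quoted here, so the PyTorch sketch below only illustrates the general shape of such a coupled objective; the loss names, the binary-cross-entropy attention term, the soft foreground target, and the 0.5 weighting are assumptions made purely for illustration.

import torch
import torch.nn.functional as F

def joint_objective(id_logits, labels, attn_map, fg_mask, lambda_att=0.5):
    """Illustrative coupled loss: identity classification on the attended
    (foreground-weighted) branch plus a target-attention term that pushes
    the predicted attention map toward the foreground region.

    id_logits : (B, num_ids) identity predictions from the attended branch
    labels    : (B,) ground-truth identity labels
    attn_map  : (B, 1, H, W) sigmoid-activated attention over the image
    fg_mask   : (B, 1, H, W) soft foreground target in [0, 1] (assumed)
    The 0.5 weight and the BCE form of the attention term are assumptions.
    """
    loss_id = F.cross_entropy(id_logits, labels)           # re-ID branch
    loss_att = F.binary_cross_entropy(attn_map, fg_mask)   # attention branch
    return loss_id + lambda_att * loss_att                 # single objective L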
“…Su et al. [20] proposed a Pose-driven Deep Convolutional (PDC) model to learn improved feature extraction and matching models end to end. Wei et al. [24] also adopted a human pose estimation (key-point detection) approach in their Global-Local-Alignment Descriptor (GLAD) algorithm. The local body parts are detected and learned together with the global image by a four-stream CNN model, which yields a discriminative and robust representation.…”
Section: AAM Heatmap
Citation type: mentioning (confidence: 99%)
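GLAD's descriptor is obtained by running the whole image and the detected body parts through parallel streams and concatenating their outputs. A minimal PyTorch sketch of that four-stream global-plus-local concatenation follows; the ResNet-18 backbone, the 256-dimensional per-stream features, and the input interface are illustrative assumptions rather than the authors' exact architecture.

import torch
import torch.nn as nn
import torchvision.models as models

class FourStreamDescriptor(nn.Module):
    """Sketch of a GLAD-style four-stream descriptor: one stream for the
    global image and one for each of the head / upper-body / lower-body
    crops, with the four feature vectors concatenated into one descriptor.
    Backbone and feature size are illustrative choices, not GLAD's exact ones.
    """

    def __init__(self, feat_dim=256):
        super().__init__()
        def make_stream():
            backbone = models.resnet18()
            backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
            return backbone
        # Four parallel streams: global, head, upper-body, lower-body
        self.streams = nn.ModuleList([make_stream() for _ in range(4)])

    def forward(self, global_img, head, upper, lower):
        parts = [global_img, head, upper, lower]
        feats = [stream(x) for stream, x in zip(self.streams, parts)]
        return torch.cat(feats, dim=1)   # (B, 4 * feat_dim) descriptor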
“…These points can then be used to define the bounding boxes of limbs. GLAD [13] takes advantage of the DeeperCut [14] pose estimation method, which uses a 152-layer network to predict a series of skeleton key-points. It then uses a subset of these key-points to divide a person image into three areas: head, upper-body and lower-body.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
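The division described here amounts to cutting the image into three horizontal strips at a few key-point heights. The sketch below illustrates such a split, assuming the pose estimator returns named key-points with pixel coordinates; the specific key-points used (neck and hips) and the dictionary interface are assumptions, since the quoted text only names the three output regions.

import numpy as np

def split_into_regions(image, keypoints):
    """Split a person image into head / upper-body / lower-body strips
    using estimated key-point heights (illustrative key-point choice).

    image     : (H, W, 3) array
    keypoints : dict mapping key-point names to (x, y) pixel coordinates,
                e.g. as produced by a pose estimator such as DeeperCut
    """
    h = image.shape[0]
    neck_y = int(keypoints["neck"][1])                    # head / upper boundary
    hip_y = int((keypoints["left_hip"][1] +
                 keypoints["right_hip"][1]) / 2)          # upper / lower boundary

    # Clamp the boundaries so the three strips are always non-empty
    neck_y = max(1, min(neck_y, h - 2))
    hip_y = max(neck_y + 1, min(hip_y, h - 1))

    head = image[:neck_y]
    upper_body = image[neck_y:hip_y]
    lower_body = image[hip_y:]
    return head, upper_body, lower_body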