2016
DOI: 10.1109/tpami.2016.2522418
Person Re-Identification by Discriminative Selection in Video Ranking

Abstract: Current person re-identification (ReID) methods typically rely on single-frame imagery features, whilst ignoring the space-time information from image sequences that is often available in practical surveillance scenarios. Single-frame (single-shot) visual appearance matching is inherently limited for person ReID in public spaces due to the visual ambiguity and uncertainty arising from non-overlapping camera views, where changes in viewing conditions can cause significant variations in people's appearance. In t…

Cited by 225 publications (155 citation statements)
References 54 publications (129 reference statements)
“…Supervised person re-id: Most existing person re-id models are created by supervised learning methods on a separate set of cross-camera identity-labelled training data (Wang et al., 2014b, 2016b; Zhao et al., 2017; Chen et al., 2017; Li et al., 2017; Chen et al., 2018b; Li et al., 2018b; Song et al., 2018; Chang et al., 2018; Sun et al., 2018; Shen et al., 2018a; Wei et al., 2018; Hou et al., 2019; Zheng et al., 2019; Zhang et al., 2019; Quan et al., 2019; Zhou et al., 2019). Relying on the strong supervision of cross-camera identity-labelled training data, they have achieved a remarkable performance boost.…”
Section: Related Work
confidence: 99%
“…The periodicity of pedestrian gait is also exploited in [24] to generate a spatio-temporal body-action model made up of a series of action primitives of certain body parts, treated independently from each other. The strict alignment assumptions made by gait recognition techniques are relaxed in [47,61], where unregulated video sequences are automatically broken down based on motion energy profiling (e.g., optical flow). With the spread of the deep learning paradigm, temporal-based approaches have emerged using RNNs (combined with Convolutional Neural Networks) in a Siamese configuration [30,56,52,71].…”
Section: Video-Sequence Alignment Techniques
confidence: 99%
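The motion energy profiling mentioned above can be illustrated with a minimal sketch. This is not the authors' exact procedure: it assumes per-frame optical-flow magnitude maps are already available (in practice computed with a dense optical-flow method between consecutive frames), summarises each frame as its mean flow magnitude, and breaks the sequence at local minima of that profile, which roughly correspond to gait-cycle boundaries.

```python
import numpy as np

def motion_energy_profile(flow_mags):
    """Per-frame motion energy: mean optical-flow magnitude over each frame.

    flow_mags: (T, H, W) array of |flow| per pixel (hypothetical input;
    any dense optical-flow estimator could supply it).
    """
    return flow_mags.reshape(len(flow_mags), -1).mean(axis=1)

def segment_by_energy_minima(energy):
    """Split [0, T) at strict local minima of the motion energy profile."""
    cuts = [t for t in range(1, len(energy) - 1)
            if energy[t] < energy[t - 1] and energy[t] < energy[t + 1]]
    bounds = [0] + cuts + [len(energy)]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
```

Each returned `(start, end)` pair delimits one candidate fragment of the unregulated video sequence, from which matching fragments can then be selected.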
“…For video-based person re-id, given a query sequence of images (a.k.a. a tracklet) of the target of interest, the challenge consists of identifying all the corresponding matching tracklets captured across the camera network. Dealing with full sequences of images as opposed to single images offers significant advantages: a) exploiting the temporal dependencies between intra-sequence frames [52,30,56,71]; b) extracting more robust spatial appearance descriptors [64,51,58]; c) partially recovering from occlusions and reducing the influence of the background [46,47,13]. All three aspects contribute to reducing the impact of the factors affecting performance (changing pose/viewpoint, lighting, occlusions, etc.…
Section: Introduction
confidence: 99%
“…For test and training data, ten random splits (cf. (Gray et al., 2007)) of the often-cited datasets VIPeR (Gray et al., 2007), PRID 450S (Roth et al., 2014), and iLIDS-VID (Wang et al., 2014; Wang et al., 2016) were analysed. The same splits are used for every down-sampled trial.…”
Section: Re-id Performance on Down-sampled Images
confidence: 99%
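The ten-random-splits evaluation protocol quoted above is easy to make reproducible. A minimal sketch, assuming the common convention of splitting the identity set in half for training and testing and fixing a seed so the same splits can be reused across trials:

```python
import random

def make_splits(identity_ids, n_splits=10, seed=0):
    """Generate repeatable half/half train-test identity splits.

    Mirrors the ten-split protocol popularised by Gray et al. (2007):
    identities (not images) are partitioned, so no person appears in
    both train and test.
    """
    rng = random.Random(seed)  # fixed seed -> identical splits every run
    splits = []
    for _ in range(n_splits):
        ids = list(identity_ids)
        rng.shuffle(ids)
        half = len(ids) // 2
        splits.append((sorted(ids[:half]), sorted(ids[half:])))
    return splits
```

Reusing the same seed (and hence the same splits) for every down-sampled trial, as the quoted study does, ensures that performance differences reflect the down-sampling rather than split variance.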