Fusion of appearance and motion-based sparse representations for multi-shot person re-identification. Neurocomputing, Elsevier, 2017, 248, pp.
Abstract: We present in this paper a multi-shot person re-identification system for video sequences based on interest point (IP) matching. Our contribution is to exploit the complementarity of a person's appearance and movement style, which yields a description that is more robust to various complexity factors. The proposed contributions concern person description and feature matching. For person description, we propose a fusion strategy of two complementary features provided by appearance and motion descriptions: motion is described with spatiotemporal IPs, while appearance is described with spatial IPs. For feature matching, we use Sparse Representation (SR) as a local matching method between IPs. The fusion strategy computes a weighted sum of the votes of matched IPs and then applies the majority-vote rule. The approach is evaluated on a large public dataset, PRID-2011. The experimental results show that our approach clearly outperforms the current state of the art.
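To make the matching-and-fusion pipeline concrete, the sketch below illustrates one way SR-based voting and the weighted vote fusion could be realised. It relies on assumptions not specified in the abstract: a Lasso solver for the sparse coding step, class-wise reconstruction residuals to decide each interest point's vote, and example fusion weights. The function names `sparse_vote` and `fuse_votes` are hypothetical and introduced only for illustration.

```python
# Minimal sketch of SR-based IP matching and weighted vote fusion.
# Assumptions (not from the paper): descriptors are column-stacked into a
# gallery dictionary, sparse codes come from a Lasso solver, and each IP
# votes for the identity with the smallest class-wise reconstruction residual.
import numpy as np
from sklearn.linear_model import Lasso


def sparse_vote(descriptor, dictionary, labels, alpha=0.01):
    """Vote of a single interest point via sparse-representation matching.

    descriptor : (dim,) probe IP descriptor
    dictionary : (dim, n_atoms) gallery IP descriptors stacked as columns
    labels     : (n_atoms,) gallery identity of each dictionary atom
    """
    # Solve descriptor ~ dictionary @ x with an L1-sparsity penalty.
    lasso = Lasso(alpha=alpha, max_iter=2000)
    lasso.fit(dictionary, descriptor)
    coef = lasso.coef_

    # Vote for the identity whose atoms best reconstruct the descriptor.
    best_id, best_residual = None, np.inf
    for identity in np.unique(labels):
        mask = labels == identity
        recon = dictionary[:, mask] @ coef[mask]
        residual = np.linalg.norm(descriptor - recon)
        if residual < best_residual:
            best_id, best_residual = identity, residual
    return best_id


def fuse_votes(appearance_votes, motion_votes, w_app=0.6, w_mot=0.4):
    """Weighted sum of appearance and motion IP votes, then majority rule.

    appearance_votes / motion_votes : lists of identities voted for by the
    matched spatial / spatiotemporal IPs (example weights, chosen arbitrarily).
    """
    identities = set(appearance_votes) | set(motion_votes)
    scores = {
        i: w_app * appearance_votes.count(i) + w_mot * motion_votes.count(i)
        for i in identities
    }
    return max(scores, key=scores.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_atoms = 64, 30
    gallery = rng.normal(size=(dim, n_atoms))
    gallery /= np.linalg.norm(gallery, axis=0)
    gallery_ids = np.repeat(np.arange(3), n_atoms // 3)

    # Probe IPs built from identity 1 atoms plus noise, so the vote should favour it.
    probe_ips = [gallery[:, 10] + 0.05 * rng.normal(size=dim) for _ in range(5)]
    app_votes = [sparse_vote(ip, gallery, gallery_ids) for ip in probe_ips]
    mot_votes = app_votes  # in practice, votes from spatiotemporal IPs
    print("Predicted identity:", fuse_votes(app_votes, mot_votes))
```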