Person re-identification has attracted considerable interest due to its significance in intelligent video surveillance. It remains a difficult task because of critical challenges such as appearance changes, misalignment, occlusion and background noise. The batch drop block (BDB) layer has recently been used in person re-identification by exploiting a feature-erasing procedure. However, BDB drops a block of features randomly, which causes a loss of contextual information and makes the model difficult to train. Moreover, because features are dropped at random, large areas of discriminative information may be lost during training, resulting in low efficiency and performance. To address this problem and to improve the model's representation power, we propose a novel, lightweight, self-adaptive bottleneck attention module with a self-attention branch that improves performance with low parameter overhead and negligible computational cost. The proposed approach incorporates the bottleneck attention module (BAM) between ResNet layers to suppress background noise and highlight high-level semantic parts. Further, dilated convolutions with batch normalization are used to tackle the loss of contextual information and to avoid overfitting. In addition, an informative global branch captures the global representation of the network, while the attention branch encapsulates multi-scale local salient information. Two loss functions, softmax and batch hard triplet, are used to train each branch, forcing the network to capture the attributes shared within the same identity and to maintain distance between distinct individuals. Compared with BDB, our network improves mAP to 88.1% and Rank-1 to 96.3% on the Market-1501 dataset. On the CUHK03-Detected dataset, it achieves a 79.2% mAP score with 81.4% Rank-1, whereas on the CUHK03-Labelled dataset, an mAP score of 81.3% and a Rank-1 score of 83.3% are achieved. Experiments reveal that the ResNet model with multiple BAM layers added performs consistently across these benchmark datasets using softmax and batch hard triplet losses.
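To make the described components concrete, the following is a minimal PyTorch sketch of a standard bottleneck attention module with a dilated-convolution spatial branch, in the spirit of the BAM block the abstract places between ResNet layers. It follows the commonly published BAM design (channel branch plus dilated spatial branch, residual-style refinement); the paper's self-adaptive variant and its self-attention branch are not specified in the abstract, so the layer names and hyperparameters (`reduction`, `dilation`) here are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class BAMSketch(nn.Module):
    """Illustrative bottleneck attention module: a channel branch and a
    dilated-convolution spatial branch form a joint attention map M(F);
    the refined feature is F * (1 + sigmoid(M(F)))."""

    def __init__(self, channels, reduction=16, dilation=4):
        super().__init__()
        # Channel branch: global average pool -> bottleneck MLP -> per-channel logits.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.BatchNorm1d(channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial branch: 1x1 reduction, two dilated 3x3 convs with batch norm
        # (large receptive field without losing resolution), then a 1x1 conv
        # producing a single-channel spatial map.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.BatchNorm2d(channels // reduction),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels // reduction, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels // reduction),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels // reduction, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels // reduction),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, kernel_size=1),
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        channel_att = self.channel_mlp(x).view(b, c, 1, 1).expand_as(x)
        spatial_att = self.spatial(x).expand_as(x)
        attention = torch.sigmoid(channel_att + spatial_att)
        return x * (1 + attention)  # residual-style refinement of the input feature
```

The batch hard triplet loss mentioned for training both branches is a standard formulation (Hermans et al., "In Defense of the Triplet Loss"): for each anchor in the mini-batch, the hardest (farthest) positive and hardest (closest) negative are mined. A compact sketch, with the margin value as an assumption:

```python
def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard triplet loss over a mini-batch of embeddings (N x D)."""
    dist = torch.cdist(embeddings, embeddings, p=2)          # pairwise distances
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)      # identity mask
    # Hardest positive: farthest sample sharing the anchor's identity.
    hardest_pos = (dist * same_id.float()).max(dim=1).values
    # Hardest negative: closest sample with a different identity.
    hardest_neg = torch.where(same_id, torch.full_like(dist, float('inf')), dist).min(dim=1).values
    return torch.relu(hardest_pos - hardest_neg + margin).mean()
```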