2022
DOI: 10.3390/electronics11071016
Transformer-Based Attention Network for Vehicle Re-Identification

Abstract: Vehicle re-identification (ReID) focuses on searching for images of the same vehicle across different cameras and can be considered the most fine-grained ID-level classification task. It is fundamentally challenging due to the significant differences in appearance presented by a vehicle with the same ID (especially from different viewpoints), coupled with the subtle differences between vehicles with different IDs. Spatial attention mechanisms that have been proven to be effective in computer vision tasks als…

Cited by 17 publications (6 citation statements)
References 34 publications
“…It can be mainly divided into supervised, metric and unsupervised learning methods. Methods based on supervised learning are further divided into methods based on global features [18,19], local features [20,21] and attention mechanism [22]. The goal of metric learning is to learn a mapping from the original features to the embedding space, such that the objects of the same category are close in the embedding space, and the distance between different categories is far away.…”
Section: Vehicle Re-identification (mentioning, confidence: 99%)
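The metric-learning objective quoted above — same-ID embeddings close, different-ID embeddings far — is commonly realized with a triplet margin loss. The following is a minimal sketch, assuming Euclidean distance; the function name and the margin value of 1.0 are illustrative, not taken from the paper:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull same-ID embeddings together, push different-ID ones apart."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to a same-ID sample
    d_neg = np.linalg.norm(anchor - negative)  # distance to a different-ID sample
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: anchor and positive share a vehicle ID.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close: same ID
n = np.array([3.0, 0.0])   # far: different ID
print(triplet_loss(a, p, n))  # 0.0 — the margin is already satisfied
```

The loss is zero once the negative is farther than the positive by at least the margin, which is exactly the geometry the quoted passage describes.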
“…For example, Zhu et al [113] added self-attentive models to each branch of the CNN network for fine-grained recognition of vehicles. To reduce the influence of noise in the image, Lian et al [114] used the attention network based on transformer to extract the global features of vehicle re-ID. Jiang et al [115] studied a global reference attention network.…”
Section: Vehicle Re-identification Based on Attention Mechanism (mentioning, confidence: 99%)
“…Other works enhanced CNNs using Transformers. TANet [80] proposes an attentionbased CNN to explore long-range dependencies. The method is composed of three branches:…”
Section: Transformers in Vehicle Re-identification (mentioning, confidence: 99%)
“…Given the base learning rate (LR_base), we spent 10 epochs linearly increasing LR_base × 10⁻¹ → LR_base. Unless mentioned otherwise, cosine annealing was used as the learning rate scheduler [47,80,96]. Table 6.…”
Section: Baseline Model (mentioning, confidence: 99%)
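The schedule quoted above — 10 epochs of linear warmup from LR_base/10 to LR_base, then cosine annealing — can be sketched as a plain function of the epoch index. This is an assumed reconstruction: the function name, the total epoch count of 120, and annealing down to zero are illustrative choices, not details from the cited paper:

```python
import math

def lr_at_epoch(epoch, lr_base, warmup_epochs=10, total_epochs=120):
    """Linear warmup from lr_base/10 to lr_base, then cosine annealing to 0."""
    if epoch < warmup_epochs:
        # Linearly interpolate between 0.1 * lr_base and lr_base.
        t = epoch / warmup_epochs
        return lr_base * (0.1 + 0.9 * t)
    # Cosine annealing over the remaining epochs.
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * lr_base * (1 + math.cos(math.pi * t))

print(lr_at_epoch(0, 0.01))    # 0.001 — warmup starts at LR_base / 10
print(lr_at_epoch(10, 0.01))   # 0.01  — warmup complete, annealing begins
```

Warmup avoids large, noisy updates while the network is far from initialization equilibrium; the cosine phase then decays the rate smoothly rather than in discrete steps.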