READ: Reciprocal Attention Discriminator for Image-to-Video Re-identification
2020 · DOI: 10.1007/978-3-030-58568-6_20

Cited by 12 publications (7 citation statements) · References 35 publications
“…[131], [133]–[140]. Fig. 3: Channel, spatial, and temporal attention can be regarded as operating on different domains.…”
Section: Introduction
Mentioning confidence: 99%
“…Multiscale approaches require significantly more operations, and therefore slow down inference, which is contrary to our goal. Attention has also been applied to video-based reid on streams of images [45], [46], [47], [48], [49], [50], [51], [52], [53], [54]. These approaches primarily focus on temporal attention.…”
Section: Related Work
Mentioning confidence: 99%
“…As for I2V person Re-ID, several approaches [8,37] have been studied. Gu et al. [7] propose a novel temporal knowledge propagation (TKP) method that first uses a video representation network to learn temporal knowledge and then propagates it to the image representation network.…”
Section: Cross-modality Person Re-ID
Mentioning confidence: 99%
“…However, due to the modality discrepancy, images and videos with the same identity label may lie far apart in feature space. To this end, existing methods [7,8] mainly develop dedicated network architectures or training objectives to align image and video features in the shared space. For example, READ [8] proposes an attention-aware discriminator architecture that selectively aggregates useful spatio-temporal information; TKP [7] devises a temporal knowledge propagation method that transfers the temporal knowledge of videos to images via a feature-based TKP loss and a distance-based TKP loss.…”
Section: Introduction
Mentioning confidence: 99%
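To make the last excerpt concrete, here is a minimal NumPy sketch of what feature-level and distance-level alignment losses of the kind attributed to TKP [7] could look like. The function names, shapes, and exact formulas are illustrative assumptions, not taken from the TKP paper: the feature-based term pulls each image feature toward its corresponding video feature, while the distance-based term matches the pairwise distance structure of the two modalities.

```python
import numpy as np

def feature_tkp_loss(img_feats, vid_feats):
    """Feature-level alignment (sketch): mean squared distance between each
    image feature and the feature of the corresponding video, so the image
    branch mimics the temporal knowledge learned by the video branch."""
    return float(np.mean(np.sum((img_feats - vid_feats) ** 2, axis=1)))

def pairwise_dist(feats):
    """Euclidean distance matrix between all rows of `feats`."""
    sq = np.sum(feats ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T
    return np.sqrt(np.maximum(d2, 0.0))  # clamp small negatives from rounding

def distance_tkp_loss(img_feats, vid_feats):
    """Distance-level alignment (sketch): match the pairwise distance
    structure of the image features to that of the video features."""
    return float(np.mean((pairwise_dist(img_feats) - pairwise_dist(vid_feats)) ** 2))

# Toy usage: 4 identities with 8-dim features; the image branch is a
# slightly perturbed copy of the video branch, so both losses are small.
rng = np.random.default_rng(0)
vid = rng.normal(size=(4, 8))
img = vid + 0.1 * rng.normal(size=(4, 8))
print(feature_tkp_loss(img, vid), distance_tkp_loss(img, vid))
```

In practice such losses would be computed on mini-batch features from two branches of a deep network and combined with an identity classification loss; the NumPy version above only shows the loss arithmetic itself.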