2021
DOI: 10.48550/arxiv.2110.13556
Preprint

Learning Explicit and Implicit Latent Common Spaces for Audio-Visual Cross-Modal Retrieval

Abstract: Learning a common subspace is a prevalent way to address the problem that data from different modalities have inconsistent distributions and representations and therefore cannot be compared directly in cross-modal retrieval. Previous cross-modal retrieval methods focus on projecting the data from different modalities into a common latent subspace, learning the correlation between them to bridge the modality gap. However, due to the rich semantic information in the video, the heterogeneous nature of audio-visual data…
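The abstract describes the general common-subspace approach: modality-specific encoders project audio and visual features into a shared latent space where paired items can be compared directly. Below is a minimal PyTorch sketch of that general idea, not the paper's method; the feature dimensions, layer sizes, and the symmetric contrastive loss are all illustrative assumptions.

```python
# Minimal sketch of common-subspace cross-modal retrieval: two encoders
# map audio and visual features into one shared space, trained so that
# paired samples land close together. All dimensions and the loss choice
# are assumptions for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Projects one modality's features into the shared latent space."""
    def __init__(self, in_dim: int, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512),
            nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so retrieval can rank by cosine similarity.
        return F.normalize(self.net(x), dim=-1)

# Hypothetical input sizes for pre-extracted features.
audio_enc = ModalityEncoder(in_dim=1024)   # e.g. audio-network features
visual_enc = ModalityEncoder(in_dim=2048)  # e.g. CNN visual features

audio = torch.randn(8, 1024)   # batch of 8 paired audio/visual samples
visual = torch.randn(8, 2048)
za, zv = audio_enc(audio), visual_enc(visual)

# Symmetric InfoNCE-style loss: matching pairs lie on the diagonal of
# the cross-modal similarity matrix.
logits = za @ zv.t() / 0.07
targets = torch.arange(len(za))
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.t(), targets)) / 2
loss.backward()

# At retrieval time, rank visual items for an audio query (or the
# reverse) by similarity in the shared space.
ranking = (za[0] @ zv.t()).argsort(descending=True)
```

Once trained, nearest-neighbor search in this shared space serves both retrieval directions (audio-to-visual and visual-to-audio), which is what makes the common-subspace formulation attractive for cross-modal tasks.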

Cited by 0 publications
References 69 publications