2021
DOI: 10.1145/3458281
Cross-Modal Hybrid Feature Fusion for Image-Sentence Matching

Abstract: Image-sentence matching is a challenging task in the field of language and vision, which aims at measuring the similarities between images and sentence descriptions. Most existing methods independently map the global features of images and sentences into a common space to calculate the image-sentence similarity. However, the image-sentence similarity obtained by these methods may be coarse as (1) an intermediate common space is introduced to implicitly match the heterogeneous features of images and sentences i…
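The global-embedding baseline that the abstract critiques can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes image and sentence features have already been projected into a shared d-dimensional space, and all names and dimensions are illustrative.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Project vectors onto the unit sphere so a dot product equals cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def global_similarity(image_feats, sentence_feats):
    """Cosine similarity between globally pooled image and sentence embeddings
    that were independently mapped into a common space (the baseline setting)."""
    img = l2_normalize(image_feats)
    txt = l2_normalize(sentence_feats)
    return img @ txt.T  # (n_images, n_sentences) similarity matrix

# Illustrative random embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
imgs = rng.normal(size=(3, 128))   # 3 image embeddings in the common space
txts = rng.normal(size=(5, 128))   # 5 sentence embeddings
S = global_similarity(imgs, txts)
```

Because each modality is embedded independently before the single dot product, no fine-grained region-word interaction is captured, which is the coarseness the paper targets.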

Cited by 38 publications (6 citation statements)
References 57 publications
“…The image-text cross-modal retrieval task is designed to explore the correspondence between image and text. The existing matching methods can be roughly divided into two categories: graph-free paradigm [6,7,10–14,18–27] and graph-based paradigm [8,9,15–17,29–33].…”
Section: Image-text Cross-modal Retrieval
confidence: 99%
“…While these methods show promising results in image-text cross-modal retrieval, they mainly embed global feature representations and overlook the fine-grained semantic associations between image and text. To tackle this limitation, recent research concentrates on learning correspondences between image regions and text words, achieving semantic coverage from coarse to fine [11–14]. For instance, Xu et al [11] propose a cross-modal hybrid feature fusion method to capture interactions between image and text, which learns image-text similarity by fusing feature representations of intra- and inter-modality, providing robust semantic interactions between image regions and text words.…”
Section: Image-text Cross-modal Retrieval
confidence: 99%
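The fusion idea described in the quote above — combining an inter-modality (region-word) term with an intra-modality (global) term — can be sketched in a toy form. This is not the paper's exact formulation; the attention scheme, pooling, and the mixing weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for attention weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hybrid_similarity(regions, words, alpha=0.5):
    """Toy hybrid score for one image-sentence pair.

    inter-modality term: each word attends over image regions, then we average
    the cosine between each word and its attended image context;
    intra-modality term: cosine between mean-pooled global features.
    `alpha` (illustrative) balances the two terms.
    """
    eps = 1e-8
    # Inter-modality: (n_words, n_regions) attention, then word-aligned contexts.
    attn = softmax(words @ regions.T)
    attended = attn @ regions
    cos = np.sum(attended * words, axis=1) / (
        np.linalg.norm(attended, axis=1) * np.linalg.norm(words, axis=1) + eps)
    inter = cos.mean()
    # Intra-modality: global (pooled) cosine similarity.
    g_img, g_txt = regions.mean(axis=0), words.mean(axis=0)
    intra = g_img @ g_txt / (np.linalg.norm(g_img) * np.linalg.norm(g_txt) + eps)
    return alpha * inter + (1 - alpha) * intra

# Illustrative features: 4 image regions and 6 sentence words, 64-dim each.
rng = np.random.default_rng(1)
score = hybrid_similarity(rng.normal(size=(4, 64)), rng.normal(size=(6, 64)))
```

Since both terms are averages of cosines, the fused score stays in [-1, 1]; the point of the hybrid design is that ranking by this score uses region-word alignment evidence that a purely global embedding discards.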