2022
DOI: 10.48550/arxiv.2210.08901
Preprint

Contrastive Language-Image Pre-Training with Knowledge Graphs

Abstract: Recent years have witnessed the fast development of large-scale pre-training frameworks that can extract multi-modal representations in a unified form and achieve promising performances when transferred to downstream tasks. Nevertheless, existing approaches mainly focus on pre-training with simple image-text pairs, while neglecting the semantic connections between concepts from different modalities. In this paper, we propose a knowledge-based pre-training framework, dubbed Knowledge-CLIP, which injects semantic …
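
The abstract contrasts Knowledge-CLIP with standard pre-training on "simple image-text pairs", i.e. the CLIP-style contrastive objective. As a point of reference, the sketch below shows the usual symmetric InfoNCE loss over a batch of matched image-text embeddings. It is only an illustration of that baseline objective under assumed toy shapes; it does not reproduce Knowledge-CLIP's knowledge-graph-based training, and the function and variable names (e.g. clip_contrastive_loss) are hypothetical.

```python
# Minimal sketch of the symmetric contrastive (InfoNCE) loss used in
# CLIP-style image-text pre-training. Illustrative only; Knowledge-CLIP
# additionally injects knowledge-graph information, which is not shown here.
import torch
import torch.nn.functional as F


def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of matched image-text pairs."""
    # L2-normalize embeddings so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity logits: row i = image i vs. every text in the batch.
    logits = image_emb @ text_emb.t() / temperature

    # The matching pair for each image/text sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2


if __name__ == "__main__":
    # Toy batch: 8 image-text pairs with 512-dimensional embeddings.
    img = torch.randn(8, 512)
    txt = torch.randn(8, 512)
    print(float(clip_contrastive_loss(img, txt)))
```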

Cited by 0 publications
References 50 publications