2021
DOI: 10.48550/arxiv.2107.12021
Preprint

Language Models as Zero-shot Visual Semantic Learners

Abstract: Visual Semantic Embedding (VSE) models, which map images into a rich semantic embedding space, have been a milestone in object recognition and zero-shot learning. Current approaches to VSE heavily rely on static word embedding techniques. In this work, we propose a Visual Semantic Embedding Probe (VSEP) designed to probe the semantic information of contextualized word embeddings in visual semantic understanding tasks. We show that the knowledge encoded in transformer language models can be exploited for tasks …
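The abstract describes mapping images into a shared semantic embedding space so that recognition reduces to comparing an image embedding against class-name word embeddings. As a minimal sketch of that idea (not the paper's VSEP implementation; the embeddings below are hypothetical hand-made vectors rather than learned ones), zero-shot classification becomes a nearest-neighbour search by cosine similarity:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(image_emb: np.ndarray, class_embs: dict) -> str:
    """Assign the class whose name embedding lies nearest (by cosine)
    to the image embedding in the shared semantic space."""
    return max(class_embs, key=lambda name: cosine_similarity(image_emb, class_embs[name]))

# Toy, hand-made embeddings for illustration only; a real VSE model
# would learn the image encoder and use pretrained word embeddings.
class_embs = {
    "cat": np.array([1.0, 0.0, 0.0]),
    "dog": np.array([0.0, 1.0, 0.0]),
}
image_emb = np.array([0.9, 0.1, 0.0])  # an image embedding close to "cat"

print(zero_shot_classify(image_emb, class_embs))  # prints "cat"
```

A contextualized probe in the paper's sense would swap the static class vectors for embeddings produced by a transformer language model; the nearest-neighbour decision rule stays the same.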

Cited by 0 publications
References 41 publications