2021
DOI: 10.48550/arxiv.2112.13884
Preprint

A Fistful of Words: Learning Transferable Visual Models from Bag-of-Words Supervision

Abstract: Using natural language as supervision for training visual recognition models holds great promise. Recent works have shown that if such supervision is used in the form of alignment between images and captions in large training datasets, then the resulting aligned models perform well on zero-shot classification as a downstream task. In this paper, we focus on teasing out what parts of the language supervision are essential for training zero-shot image classification models. Through extensive and careful expe…
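The alignment the abstract describes is typically used for zero-shot classification by comparing an image embedding against text embeddings of class prompts. The sketch below illustrates only that comparison step with placeholder embeddings; the function name and dimensions are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of zero-shot classification with an aligned image-text model.
# The embeddings below stand in for encoder outputs; any model that maps images
# and text prompts into the same space would be used the same way.
import torch
import torch.nn.functional as F

def zero_shot_classify(image_emb: torch.Tensor, class_text_embs: torch.Tensor) -> int:
    """Return the index of the class whose text embedding is most similar to the image."""
    image_emb = F.normalize(image_emb, dim=-1)               # (d,)
    class_text_embs = F.normalize(class_text_embs, dim=-1)   # (num_classes, d)
    similarities = class_text_embs @ image_emb               # cosine similarities, (num_classes,)
    return int(similarities.argmax().item())

# Example with random tensors standing in for encoder outputs.
torch.manual_seed(0)
img = torch.randn(512)
txt = torch.randn(10, 512)   # e.g. embeddings of "a photo of a {class}" prompts
print(zero_shot_classify(img, txt))
```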

Cited by 2 publications (2 citation statements) · References 31 publications
“…On the vision side, show that a bag-of-local-features model performs almost as well as their state-of-the-art counterparts. Closer to our experiments, the work of Tejankar et al. (2021) shows that training contrastive vision-language models using only a bag-of-words in place of the caption does not significantly hurt performance on zero-shot classification. Our work generalizes these results, showcasing the general limits of vision-language models when dealing with relations, attributes and shuffled captions.…”
Section: Related Work (supporting)
confidence: 47%
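The bag-of-words substitution referred to above amounts to stripping word order from the caption before it reaches the text encoder. The helper below is a hedged illustration of that preprocessing step; the function name and the stopword filtering are assumptions, not the cited paper's exact recipe.

```python
# Minimal sketch of replacing a caption with its bag of words before text encoding
# in a contrastive vision-language model. Lowercasing, deduplication and stopword
# removal here are illustrative choices only.
import re

def caption_to_bag_of_words(caption, stopwords=frozenset()):
    """Strip word order from a caption: lowercase, deduplicate, drop stopwords, sort."""
    words = re.findall(r"[a-z']+", caption.lower())
    kept = sorted({w for w in words if w not in stopwords})
    return " ".join(kept)

caption = "A small brown dog catches a frisbee in the park"
print(caption_to_bag_of_words(caption, stopwords={"a", "the", "in"}))
# -> "brown catches dog frisbee park small"
```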
“…While it has been shown that poisoning web-scale datasets such as CC3M is practical [5], we assume that the version of CC3M we downloaded in January 2022 is clean. Although CC3M is smaller in size than the 400 million pairs used to train the original CLIP model [42], it is suitable for our storage and computational resources and has been used in multiple language-image pretraining studies [6,28,34,47,18]. Like in [42], we use a ResNet-50 model as the CLIP vision encoder and a transformer as the text encoder.…”
Section: CLIP Pretraining (mentioning)
confidence: 99%
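The citing work describes a CLIP-style setup: a ResNet-50 vision encoder, a transformer text encoder, and contrastive pretraining on image-text pairs such as CC3M. The sketch below is a minimal, hedged rendering of that setup; the embedding dimension, vocabulary size, text pooling and transformer size are illustrative assumptions, not the settings used in [42] or the citing paper.

```python
# Hedged sketch of a CLIP-style contrastive setup: ResNet-50 vision encoder, a small
# transformer text encoder, and a symmetric cross-entropy (InfoNCE) loss over
# matched image-text pairs in a batch.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class CLIPLike(nn.Module):
    def __init__(self, embed_dim=512, vocab_size=49408, max_len=77, width=256, layers=4, heads=8):
        super().__init__()
        # Vision tower: ResNet-50 with its classifier head replaced by a projection.
        self.visual = resnet50(weights=None)
        self.visual.fc = nn.Linear(self.visual.fc.in_features, embed_dim)
        # Text tower: token + positional embeddings followed by a transformer encoder.
        self.tok_emb = nn.Embedding(vocab_size, width)
        self.pos_emb = nn.Parameter(torch.zeros(max_len, width))
        enc_layer = nn.TransformerEncoderLayer(d_model=width, nhead=heads, batch_first=True)
        self.text_encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.text_proj = nn.Linear(width, embed_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~log(1/0.07)

    def forward(self, images, tokens):
        img_feat = F.normalize(self.visual(images), dim=-1)
        x = self.tok_emb(tokens) + self.pos_emb[: tokens.size(1)]
        x = self.text_encoder(x).mean(dim=1)          # mean-pool tokens (an assumption)
        txt_feat = F.normalize(self.text_proj(x), dim=-1)
        return img_feat, txt_feat, self.logit_scale.exp()

def contrastive_loss(img_feat, txt_feat, scale):
    # Matched pairs sit on the diagonal of the batch similarity matrix.
    logits = scale * img_feat @ txt_feat.t()
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Tiny smoke test with random data standing in for image-text pairs.
model = CLIPLike()
images = torch.randn(4, 3, 224, 224)
tokens = torch.randint(0, 49408, (4, 16))
img_f, txt_f, scale = model(images, tokens)
print(contrastive_loss(img_f, txt_f, scale).item())
```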