Vision-language pre-training (VLP) on large-scale datasets has achieved superior performance on various downstream tasks. A complete and fair benchmark (i.e., including large-scale pre-training datasets and diverse downstream tasks) is essential for VLP. While there are plenty of benchmarks with English corpora, building a rich benchmark for VLP in other languages, such as Chinese, remains a critical problem. To this end, we build a large-scale Chinese cross-modal benchmark called Zero for the research community to fairly compare VLP models. We release two pre-training datasets and five fine-tuning datasets for downstream tasks. In addition, we propose a novel pre-training framework of pre-Ranking + Ranking for cross-modal learning. Specifically, we apply global contrastive pre-ranking to learn individual representations of images and texts. We then fuse these representations in a fine-grained ranking manner via an image-text cross encoder and a text-image cross encoder. To further enhance the capability of the model, we propose a two-way distillation strategy consisting of target-guided distillation and feature-guided distillation. For brevity, we name our model R2D2. We achieve state-of-the-art performance on four public cross-modal datasets and the proposed five downstream datasets. On zero-shot tasks on Flickr30k-CN, COCO-CN, and MUGE, R2D2 pre-trained on a dataset of 250 million image-text pairs achieves significant improvements of 4.7%, 5.4%, and 6.3% in mean recall over the state-of-the-art. The datasets, models, and code are available at https://github.com/yuxie11/R2D2.
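To make the pre-Ranking + Ranking idea concrete, the sketch below shows a minimal PyTorch toy model: global contrastive learning over independent image and text embeddings (pre-ranking), followed by two cross encoders that fuse the modalities and feed an image-text matching head (ranking). The class name, linear projections standing in for the encoder backbones, feature dimensions, the use of nn.MultiheadAttention as the cross encoders, and the positive-only matching loss are all illustrative assumptions, not the exact R2D2 architecture; the two-way distillation is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PreRankRankSketch(nn.Module):
    """Toy model: global contrastive pre-ranking followed by cross-encoder ranking."""

    def __init__(self, img_in=2048, txt_in=768, dim=256, temperature=0.07):
        super().__init__()
        # Stand-ins for the image and text encoders (a real system would use
        # e.g. a ViT and a BERT backbone here).
        self.image_proj = nn.Linear(img_in, dim)
        self.text_proj = nn.Linear(txt_in, dim)
        # Two cross encoders for fine-grained ranking: one grounds text in the
        # image, the other grounds the image in the text.
        self.image_text_cross = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.text_image_cross = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.match_head = nn.Linear(dim, 2)  # binary image-text matching head
        self.temperature = temperature

    def forward(self, image_feats, text_feats):
        # --- Pre-ranking: independent global embeddings + contrastive loss ---
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        logits = img @ txt.t() / self.temperature
        targets = torch.arange(img.size(0), device=img.device)
        contrastive = 0.5 * (F.cross_entropy(logits, targets)
                             + F.cross_entropy(logits.t(), targets))

        # --- Ranking: cross encoders fuse the two modalities for matched pairs ---
        img_seq, txt_seq = img.unsqueeze(1), txt.unsqueeze(1)          # (B, 1, dim)
        fused_it, _ = self.image_text_cross(txt_seq, img_seq, img_seq)  # text queries image
        fused_ti, _ = self.text_image_cross(img_seq, txt_seq, txt_seq)  # image queries text
        match_logits = self.match_head((fused_it + fused_ti).squeeze(1))
        # Positive pairs only in this sketch; a real setup also mines hard negatives.
        matching = F.cross_entropy(match_logits, torch.ones_like(targets))

        return contrastive + matching


# Usage: random features stand in for encoder outputs.
model = PreRankRankSketch()
loss = model(torch.randn(8, 2048), torch.randn(8, 768))
```

The design point the sketch tries to capture is the coarse-to-fine split: the pre-ranking stage keeps the two encoders independent so retrieval candidates can be scored cheaply with a dot product, while the ranking stage pays the cost of cross-attention only to re-score fused pairs.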
Introduction

Vision-language pre-training (VLP) mainly learns the semantic correspondence between vision and natural language. Seminal works [15,17,24,26,30] explore VLP models and achieve significant improvements on various vision-language tasks, supported by massive data [25], excellent architectures such as Transformer [28], cross-modal models such as CLIP [24], powerful hardware, etc. In this paper, we focus on large-scale vision-language data and cross-modal learning.

With a large-scale training corpus (mainly in English), VLP models have been shown to benefit downstream tasks [15,21]. However, existing Chinese vision-language datasets are scarce and have various limitations. For instance, M6-Corpus [20] is a multi-modal pre-training dataset but is still not publicly available. Wukong [9] is a newly published cross-modal pre-training dataset