CCMB: A Large-scale Chinese Cross-modal Benchmark

Authors :
Xie, Chunyu
Cai, Heng
Li, Jincheng
Kong, Fanjing
Wu, Xiaoyu
Song, Jianfei
Morimitsu, Henrique
Yao, Lin
Wang, Dexin
Zhang, Xiangzheng
Leng, Dawei
Zhang, Baochang
Ji, Xiangyang
Deng, Yafeng
Publication Year :
2022

Abstract

Vision-language pre-training (VLP) on large-scale datasets has shown strong performance on a variety of downstream tasks. In contrast to the many available benchmarks built on English corpora, large-scale pre-training datasets and downstream datasets with Chinese corpora remain largely unexplored. In this work, we build a large-scale, high-quality Chinese Cross-Modal Benchmark named CCMB for the research community, which contains Zero, currently the largest public pre-training dataset, together with five human-annotated fine-tuning datasets for downstream tasks. Zero contains 250 million images paired with 750 million text descriptions, and two of the five fine-tuning datasets are likewise the largest of their kind for Chinese cross-modal downstream tasks. Along with CCMB, we also develop a VLP framework named R2D2, which applies a pre-Ranking + Ranking strategy to learn powerful vision-language representations and a two-way distillation method (i.e., target-guided Distillation and feature-guided Distillation) to further enhance learning capability. With Zero and the R2D2 VLP framework, we achieve state-of-the-art performance on twelve downstream datasets spanning five broad task categories: image-text retrieval, image-text matching, image captioning, text-to-image generation, and zero-shot image classification. The datasets, models, and code are available at https://github.com/yuxie11/R2D2.
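The two-way distillation mentioned in the abstract (target-guided plus feature-guided) can be illustrated with a minimal PyTorch sketch. This is an approximation written for this record, not the authors' implementation: the function name two_way_distillation_loss, the temperature and alpha hyperparameters, and the specific choices of KL divergence for the target-guided term and cosine distance for the feature-guided term are all assumptions made for illustration.

    # Minimal sketch of a two-way distillation loss (assumed form, not R2D2's code).
    import torch
    import torch.nn.functional as F

    def two_way_distillation_loss(
        student_logits: torch.Tensor,  # (batch, num_targets) student matching scores
        teacher_logits: torch.Tensor,  # (batch, num_targets) teacher/momentum-model scores
        student_feats: torch.Tensor,   # (batch, dim) student fused vision-language features
        teacher_feats: torch.Tensor,   # (batch, dim) teacher fused vision-language features
        temperature: float = 4.0,      # hypothetical softening temperature
        alpha: float = 0.5,            # hypothetical weight between the two terms
    ) -> torch.Tensor:
        # Target-guided distillation: match the student's predictive distribution
        # to the teacher's temperature-softened targets via KL divergence.
        soft_targets = F.softmax(teacher_logits.detach() / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        target_kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2

        # Feature-guided distillation: align intermediate representations,
        # here via cosine distance between L2-normalized feature vectors.
        s = F.normalize(student_feats, dim=-1)
        t = F.normalize(teacher_feats.detach(), dim=-1)
        feature_kd = (1.0 - (s * t).sum(dim=-1)).mean()

        return alpha * target_kd + (1.0 - alpha) * feature_kd

    if __name__ == "__main__":
        # Toy usage with random tensors, just to show the call shape.
        b, k, d = 8, 16, 256
        loss = two_way_distillation_loss(
            torch.randn(b, k), torch.randn(b, k),
            torch.randn(b, d), torch.randn(b, d),
        )
        print(loss.item())

Detaching the teacher tensors reflects the usual convention that gradients flow only through the student; whether R2D2 uses these exact loss forms is not stated in the abstract.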

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1333769573
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.1145/3581783.3611877