
Vision-Language Intelligence: Tasks, Representation Learning, and Large Models

Authors:
Li, Feng
Zhang, Hao
Zhang, Yi-Fan
Liu, Shilong
Guo, Jian
Ni, Lionel M.
Zhang, PengChuan
Zhang, Lei
Publication Year:
2022

Abstract

This paper presents a comprehensive survey of vision-language (VL) intelligence from a temporal perspective. The survey is motivated by the remarkable progress in both computer vision and natural language processing, and by the recent shift from single-modality processing to multi-modality comprehension. We summarize the development of this field into three periods: task-specific methods, vision-language pre-training (VLP) methods, and larger models empowered by large-scale weakly-labeled data. We first take some common VL tasks as examples to introduce the development of task-specific methods. We then focus on VLP methods and comprehensively review the key components of their model structures and training methods. After that, we show how recent work utilizes large-scale raw image-text data to learn language-aligned visual representations that generalize better on zero- or few-shot learning tasks. Finally, we discuss some potential future directions, including modality cooperation, unified representation, and knowledge incorporation. We believe this review will be helpful to researchers and practitioners in AI and ML, especially those interested in computer vision and natural language processing.

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2203.01922
Document Type: Working Paper