1. Towards Unifying Understanding and Generation in the Era of Vision Foundation Models: A Survey from the Autoregression Perspective
- Authors
Xie, Shenghao; Zu, Wenqiang; Zhao, Mingyang; Su, Duo; Liu, Shilong; Shi, Ruohua; Li, Guoqi; Zhang, Shanghang; Ma, Lei
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Autoregression in large language models (LLMs) has shown impressive scalability by unifying all language tasks into the next token prediction paradigm. Recently, there has been growing interest in extending this success to vision foundation models. In this survey, we review recent advances and discuss future directions for autoregressive vision foundation models. First, we present the trend for the next generation of vision foundation models, i.e., unifying both understanding and generation in vision tasks. We then analyze the limitations of existing vision foundation models and present a formal definition of autoregression along with its advantages. Next, we categorize autoregressive vision foundation models by their vision tokenizers and autoregression backbones. Finally, we discuss several promising research challenges and directions. To the best of our knowledge, this is the first survey to comprehensively summarize autoregressive vision foundation models under the trend of unifying understanding and generation. A collection of related resources is available at https://github.com/EmmaSRH/ARVFM.
- Comment
17 pages, 1 table, 2 figures
- Published
2024
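- Note
For context, the "next token prediction paradigm" the abstract refers to is the standard autoregressive factorization of a sequence; a minimal sketch (notation ours, not taken from the paper):

$$p(x_1, \dots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_{<t})$$

where each token $x_t$ (a word piece in an LLM, or a visual token produced by a vision tokenizer) is predicted conditioned on all preceding tokens $x_{<t}$; unifying understanding and generation amounts to expressing both kinds of vision tasks under this single objective.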