Transformers in computational visual media: A survey

Authors :
Yifan Xu
Huapeng Wei
Minxuan Lin
Yingying Deng
Kekai Sheng
Mengdan Zhang
Fan Tang
Weiming Dong
Feiyue Huang
Changsheng Xu
Source :
Computational Visual Media, Vol. 8, Iss. 1, pp. 33-62 (2021)
Publication Year :
2021
Publisher :
SpringerOpen, 2021.

Abstract

Transformers, the dominant architecture for natural language processing, have recently also attracted much attention from computational visual media researchers due to their capacity for long-range representation and their high performance. Transformers are sequence-to-sequence models that use a self-attention mechanism in place of the sequential structure of RNNs; such models can therefore be trained in parallel and can represent global information. This study comprehensively surveys recent visual transformer works. We categorize them according to task scenario: backbone design, high-level vision, low-level vision and generation, and multimodal learning. Their key ideas are also analyzed. Unlike previous surveys, we focus mainly on visual transformer methods for low-level vision and generation. The latest works on backbone design are also reviewed in detail. For ease of understanding, we summarize the main contributions of the latest works in tables. In addition to quantitative comparisons, we present image results for low-level vision and generation tasks. Computational costs and source-code links for various important works are also given in this survey to assist further development.
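
For readers new to the mechanism the abstract refers to, the following is a minimal NumPy sketch of scaled dot-product self-attention, the building block that lets a transformer relate every position to every other position at once instead of stepping through a sequence as an RNN does. It is a generic illustration, not code from the surveyed works; all function and variable names are our own.

    import numpy as np

    def self_attention(x, w_q, w_k, w_v):
        """Scaled dot-product self-attention over one sequence.
        x: (seq_len, d_model) token embeddings;
        w_q, w_k, w_v: (d_model, d_k) learned projections."""
        q, k, v = x @ w_q, x @ w_k, x @ w_v            # queries, keys, values
        scores = q @ k.T / np.sqrt(k.shape[-1])        # all-pairs similarities
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
        return weights @ v   # each output mixes information from every position

    # Toy usage: 4 tokens (e.g. image patches), 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8))
    w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
    print(self_attention(x, w_q, w_k, w_v).shape)      # (4, 8)

Because every row of the attention matrix is computed from the full input at once, there is no step-by-step recurrence: the whole sequence can be processed in parallel, which is the training-speed and global-context advantage the abstract mentions.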

Details

Language :
English
ISSN :
2096-0433 (print) and 2096-0662 (electronic)
Volume :
8
Issue :
1
Database :
Directory of Open Access Journals
Journal :
Computational Visual Media
Publication Type :
Academic Journal
Accession number :
edsdoj.01a2c8e47a144a68732982eec7751f7
Document Type :
Article
Full Text :
https://doi.org/10.1007/s41095-021-0247-3