
You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection

Authors :
Fang, Yuxin
Liao, Bencheng
Wang, Xinggang
Fang, Jiemin
Qi, Jiyang
Wu, Rui
Niu, Jianwei
Liu, Wenyu
Publication Year :
2021

Abstract

Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, and inductive biases of the target task. We find that YOLOS pre-trained only on the mid-sized ImageNet-1k dataset can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base, directly adopted from the BERT-Base architecture, can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-training schemes and model scaling strategies for Transformer in vision through YOLOS. Code and pre-trained models are available at https://github.com/hustvl/YOLOS.

Comment: NeurIPS 2021 Camera Ready
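To make the abstract's description concrete, below is a minimal, hypothetical sketch of the YOLOS idea: a plain Transformer encoder over image patch embeddings with a set of learnable detection tokens appended to the sequence, each decoded into a class and a box. The class/module names, dimensions, and the use of torch.nn.TransformerEncoder are assumptions for illustration, not the authors' implementation (see https://github.com/hustvl/YOLOS for the official code).

```python
import torch
import torch.nn as nn


class YOLOSSketch(nn.Module):
    """Illustrative sketch only: vanilla ViT-style encoder + [DET] tokens."""

    def __init__(self, img_size=224, patch_size=16, dim=768, depth=12,
                 heads=12, num_det_tokens=100, num_classes=80):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Linear projection of non-overlapping patches (ViT-style patch embedding).
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        # Learnable detection tokens appended to the patch token sequence.
        self.det_tokens = nn.Parameter(torch.zeros(1, num_det_tokens, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + num_det_tokens, dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Per-token heads: class logits (plus a "no object" class) and box coordinates.
        self.class_head = nn.Linear(dim, num_classes + 1)
        self.box_head = nn.Linear(dim, 4)
        self.num_det_tokens = num_det_tokens

    def forward(self, images):
        # images: (B, 3, H, W) -> patch tokens: (B, N, dim)
        x = self.patch_embed(images).flatten(2).transpose(1, 2)
        det = self.det_tokens.expand(x.size(0), -1, -1)
        x = torch.cat([x, det], dim=1) + self.pos_embed
        x = self.encoder(x)
        det_out = x[:, -self.num_det_tokens:]  # keep only the detection tokens
        return self.class_head(det_out), self.box_head(det_out).sigmoid()


if __name__ == "__main__":
    model = YOLOSSketch()
    logits, boxes = model(torch.randn(1, 3, 224, 224))
    print(logits.shape, boxes.shape)  # (1, 100, 81) (1, 100, 4)
```

In practice such per-token predictions are matched to ground-truth objects with a bipartite set-matching loss, as in DETR-style detectors; this sketch only shows the sequence-level view the abstract emphasizes.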

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2106.00666
Document Type :
Working Paper