
DPT: Deformable Patch-based Transformer for Visual Recognition

Authors:
Chen, Zhiyang
Zhu, Yousong
Zhao, Chaoyang
Hu, Guosheng
Zeng, Wei
Wang, Jinqiao
Tang, Ming
Publication Year:
2021

Abstract

Transformers have achieved great success in computer vision, but how to split an image into patches remains an open problem. Existing methods usually use fixed-size patch embeddings, which can destroy the semantics of objects. To address this problem, we propose a new Deformable Patch (DePatch) module, which learns to adaptively split images into patches with different positions and scales in a data-driven way, rather than using predefined fixed patches. In this way, our method can well preserve the semantics in patches. The DePatch module works as a plug-and-play module and can easily be incorporated into different transformers for end-to-end training. We term this DePatch-embedded transformer the Deformable Patch-based Transformer (DPT) and conduct extensive evaluations of DPT on image classification and object detection. Results show DPT achieves 81.9% top-1 accuracy on ImageNet classification, and 43.7% box mAP with RetinaNet and 44.3% with Mask R-CNN on MSCOCO object detection. Code is available at: https://github.com/CASIA-IVA-Lab/DPT

Comment: In Proceedings of the 29th ACM International Conference on Multimedia (MM '21)
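To make the idea concrete, here is a minimal NumPy sketch of deformable patch sampling as the abstract describes it: each patch has a center and scale, and features are gathered by bilinear interpolation at k×k sampling points per patch. This is an illustrative assumption-laden sketch, not the authors' implementation; in the actual DePatch module the offsets and scales are predicted by a small learned network, whereas here they are passed in (defaulting to a regular fixed grid), and the function names are hypothetical.

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Bilinear interpolation of img (H, W, C) at float coordinates (ys, xs)."""
    H, W, _ = img.shape
    ys = np.clip(ys, 0, H - 1)
    xs = np.clip(xs, 0, W - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    wy = (ys - y0)[..., None]; wx = (xs - x0)[..., None]
    return (img[y0, x0] * (1 - wy) * (1 - wx) + img[y0, x1] * (1 - wy) * wx
            + img[y1, x0] * wy * (1 - wx) + img[y1, x1] * wy * wx)

def deformable_patches(img, patch=4, k=2, offsets=None, scales=None):
    """Sample k*k points per patch at given centers/scales (sketch).

    In the real DePatch module, `offsets` and `scales` would be predicted
    per patch by a small network; the defaults here reproduce an ordinary
    fixed regular-grid patch embedding.
    """
    H, W, C = img.shape
    gh, gw = H // patch, W // patch
    # Regular-grid patch centers, then shift by the (learned) offsets.
    cy, cx = np.meshgrid(np.arange(gh) * patch + patch / 2,
                         np.arange(gw) * patch + patch / 2, indexing="ij")
    if offsets is None:
        offsets = np.zeros((gh, gw, 2))
    if scales is None:
        scales = np.full((gh, gw, 2), patch / 2)
    cy = cy + offsets[..., 0]
    cx = cx + offsets[..., 1]
    # k sampling positions per axis, spread across the (rescaled) patch extent.
    lin = (np.arange(k) + 0.5) / k * 2 - 1            # in [-1, 1]
    ys = cy[..., None, None] + lin[:, None] * scales[..., 0, None, None]
    xs = cx[..., None, None] + lin[None, :] * scales[..., 1, None, None]
    feats = bilinear_sample(img, ys, xs)              # (gh, gw, k, k, C)
    return feats.reshape(gh * gw, k * k * C)          # one token per patch
```

With the default zero offsets and scale patch/2, this reduces to a standard fixed-patch tokenizer; learning the offsets and scales is what lets each token adapt its receptive field to object extents, which is the core claim of the paper.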

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2107.14467
Document Type:
Working Paper
Full Text:
https://doi.org/10.1145/3474085.3475467