Parallel Pre-trained Transformers (PPT) for Synthetic Data-based Instance Segmentation

Authors :
Li, Ming
Wu, Jie
Cai, Jinhang
Qin, Jie
Ren, Yuxi
Xiao, Xuefeng
Zheng, Min
Wang, Rui
Pan, Xin
Publication Year :
2022

Abstract

Recently, synthetic data-based instance segmentation has become an increasingly favored paradigm, since it leverages simulation rendering and physics to generate high-quality image-annotation pairs. In this paper, we propose a Parallel Pre-trained Transformers (PPT) framework to accomplish the synthetic data-based instance segmentation task. Specifically, we leverage off-the-shelf pre-trained vision Transformers to alleviate the gap between natural and synthetic data, which helps provide good generalization in the downstream synthetic-data scene with few samples. Swin-B-based CBNet V2, Swin-L-based CBNet V2, and Swin-L-based Uniformer are employed for parallel feature learning, and the results of these three models are fused by a pixel-level Non-maximum Suppression (NMS) algorithm to obtain more robust results. The experimental results show that PPT ranks first in the CVPR 2022 AVA Accessibility Vision and Autonomy Challenge with 65.155% mAP.

Comment: Solution of the 1st place in the AVA Accessibility Vision and Autonomy Challenge at the CVPR 2022 workshop. Website: https://accessibility-cv.github.io/
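
The fusion step described in the abstract can be illustrated with a minimal sketch: assuming each of the three models emits binary instance masks with confidence scores, a pixel-level NMS pools all candidates, sorts them by score, and discards any mask whose mask-wise IoU with an already kept instance exceeds a threshold. The function names, data format, and threshold below are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' code) of pixel-level NMS fusion
    # for instance masks pooled from several models.
    import numpy as np

    def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
        """Pixel-level IoU between two boolean masks of identical shape."""
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return float(inter) / float(union) if union > 0 else 0.0

    def pixel_nms(predictions, iou_thresh: float = 0.5):
        """Fuse instance predictions pooled from multiple models.

        `predictions` is a list of (mask, score) pairs, each mask a boolean
        HxW array (assumed format). Higher-scoring masks suppress lower-scoring
        ones whose pixel-level IoU exceeds `iou_thresh`.
        """
        # Sort all candidate instances by confidence, highest first.
        ordered = sorted(predictions, key=lambda p: p[1], reverse=True)
        kept = []
        for mask, score in ordered:
            # Keep a candidate only if it does not overlap too much with
            # any instance that has already been kept.
            if all(mask_iou(mask, m) < iou_thresh for m, _ in kept):
                kept.append((mask, score))
        return kept

    # Usage example with random stand-in masks from "three models".
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pooled = [(rng.random((64, 64)) > 0.7, float(s)) for s in rng.random(9)]
        fused = pixel_nms(pooled, iou_thresh=0.5)
        print(f"kept {len(fused)} of {len(pooled)} candidate instances")

In practice the threshold and the exact pooling of per-model outputs would follow the challenge configuration, which the abstract does not specify.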

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2206.10845
Document Type :
Working Paper