1. One-Stage Detection Model Based on Swin Transformer
- Authors
Tae Yang Kim, Asim Niaz, Jung Sik Choi, and Kwang Nam Choi
- Subjects
Attention, computer vision, object detection, transformer network, single-stage detection, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Object detection using vision transformers (ViTs) has recently garnered considerable research interest. Vision transformers partition an image into patches and then perform image classification through multi-head attention followed by an MLP head. However, conventional models prioritize object classification over predicting the bounding boxes crucial for precise object detection. To address this gap, two-stage Transformer-based detectors have been devised that first extract feature maps via a pre-trained CNN model. In contrast, our research introduces a one-stage object detector founded on the Swin Transformer architecture. This one-stage detector performs simultaneous object classification and bounding-box prediction using a pure Swin Transformer encoder block, obviating the need for a pre-trained CNN model. The proposed model is trained, validated, and evaluated on the COCO dataset, comprising 82,783 training images, 40,504 validation images, and 40,775 test images. It achieves an average precision (AP) of 30.2%, a 5.59% improvement over the existing ViT-based one-stage detector.
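The patch-partitioning step the abstract refers to can be sketched as follows. This is a minimal, framework-free illustration of the ViT-style tokenization idea (not the authors' code); the function name and plain-list image representation are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's implementation): partition an
# H x W x C image into non-overlapping P x P patches and flatten each patch
# into a token vector, as done before attention in ViT-style models.

def image_to_patches(image, patch_size):
    """Split an image (nested lists, H x W x C) into flattened patch tokens."""
    h, w, c = len(image), len(image[0]), len(image[0][0])
    assert h % patch_size == 0 and w % patch_size == 0
    tokens = []
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            patch = []
            for i in range(top, top + patch_size):
                for j in range(left, left + patch_size):
                    patch.extend(image[i][j])  # append all C channel values
            tokens.append(patch)
    # Yields (H/P) * (W/P) tokens, each of length P * P * C.
    return tokens

# Toy 8x8 RGB image split into 4x4 patches -> 4 tokens of length 48 each.
img = [[[0.0, 0.0, 0.0] for _ in range(8)] for _ in range(8)]
tokens = image_to_patches(img, 4)
```

In a Swin Transformer these tokens are additionally grouped into shifted local windows before attention, which is what makes a pure-encoder one-stage detector computationally feasible at detection resolutions.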
- Published
- 2024