1. CamoFormer: Masked Separable Attention for Camouflaged Object Detection.
- Author
Yin B, Zhang X, Fan DP, Jiao S, Cheng MM, Van Gool L, and Hou Q
- Abstract
Identifying and segmenting camouflaged objects from the background is challenging. Inspired by the multi-head self-attention in Transformers, we present a simple masked separable attention (MSA) for camouflaged object detection. We first separate the multi-head self-attention into three parts, which are responsible for distinguishing the camouflaged objects from the background using different mask strategies. Furthermore, we propose to capture high-resolution semantic representations progressively, based on a simple top-down decoder with the proposed MSA, to attain precise segmentation results. These structures plus a backbone encoder form a new model, dubbed CamoFormer. Extensive experiments show that CamoFormer achieves new state-of-the-art performance on three widely-used camouflaged object detection benchmarks. To better evaluate the performance of the proposed CamoFormer around the border regions, we propose two new metrics, i.e., BR-M and BR-F. On average, there are ∼5% relative improvements over previous methods in terms of S-measure and weighted F-measure. Our code is available at https://github.com/HVision-NKU/CamoFormer.
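The abstract's core idea, separating the attention heads into three groups that attend under different mask strategies, can be illustrated with a hedged PyTorch sketch. The function name, the even three-way head split, and the use of a predicted foreground map to weight the outputs are illustrative assumptions, not the authors' exact implementation; consult the linked repository for the real code.

```python
import torch

def masked_separable_attention(q, k, v, fg_mask):
    """Illustrative sketch of masked separable attention (MSA).

    q, k, v: (B, H, N, D) tensors with the head count H divisible by 3.
    fg_mask: (B, 1, N) predicted foreground probabilities in [0, 1],
    flattened over spatial positions. Heads are split into three groups:
    plain (unmasked) attention, foreground-weighted, and background-weighted.
    """
    B, H, N, D = q.shape
    g = H // 3
    # Standard scaled dot-product attention over all heads.
    attn = torch.softmax(q @ k.transpose(-2, -1) / D ** 0.5, dim=-1)  # (B, H, N, N)
    out = attn @ v  # (B, H, N, D)
    fg = fg_mask.unsqueeze(-1)  # (B, 1, N, 1) foreground weights
    bg = 1.0 - fg               # complementary background weights
    # Recombine the three head groups with their respective mask strategies.
    return torch.cat([
        out[:, :g],              # plain attention heads
        out[:, g:2 * g] * fg,    # foreground-masked heads
        out[:, 2 * g:] * bg,     # background-masked heads
    ], dim=1)
```

In this sketch the masks gate the attention outputs per spatial position, so the three head groups specialize in the full scene, the camouflaged foreground, and the background, respectively.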
- Published
- 2024