
The Solution for the 5th GCAIAC Zero-shot Referring Expression Comprehension Challenge

Authors:
Huang, Longfei
Yu, Feng
Guan, Zhihao
Wan, Zhonghua
Yang, Yang
Publication Year:
2024

Abstract

This report presents a solution for the zero-shot referring expression comprehension task. Vision-language multimodal foundation models, such as CLIP and SAM, have gained significant attention in recent years as a cornerstone of mainstream research. A key application of these models is their ability to generalize to zero-shot downstream tasks. Unlike traditional referring expression comprehension, zero-shot referring expression comprehension applies pre-trained vision-language models directly to the task without task-specific training. Recent studies have improved the zero-shot performance of foundation models on referring expression comprehension by introducing visual prompts. To address this challenge, we combined multiple visual prompts, accounted for the influence of textual prompts, and employed joint prediction tailored to the characteristics of the data. Our approach achieved accuracy of 84.825 on the A leaderboard and 71.460 on the B leaderboard, securing first place.
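The abstract does not detail how the visual-prompt and textual-prompt predictions are fused. A minimal sketch of one plausible joint-prediction step, assuming each candidate box already has a similarity score from a visually prompted pass and a textually prompted pass of a CLIP-style model; the function name, the fusion weight `alpha`, and the weighted-average rule are illustrative assumptions, not the authors' stated method:

```python
def select_box(scores_visual, scores_text, alpha=0.5):
    """Fuse per-proposal scores from a visual-prompt pass and a
    textual-prompt pass, then return the highest-scoring box index.

    NOTE: the weighted average and `alpha` are hypothetical; the
    report does not specify its fusion rule.
    """
    combined = [alpha * v + (1.0 - alpha) * t
                for v, t in zip(scores_visual, scores_text)]
    return combined.index(max(combined))

# Example: proposal 1 scores highest under both prompt styles.
print(select_box([0.12, 0.88, 0.30], [0.20, 0.75, 0.41]))  # → 1
```

In practice the per-box scores would come from cropping or marking each proposal (e.g., with a drawn visual prompt) and scoring it against the referring expression with a pre-trained vision-language model.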

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.04998
Document Type:
Working Paper