
Can Vision Language Models Learn from Visual Demonstrations of Ambiguous Spatial Reasoning?

Authors :
Zhao, Bowen
Dirac, Leo Parker
Varshavskaya, Paulina
Publication Year :
2024

Abstract

Large vision-language models (VLMs) have become state-of-the-art for many computer vision tasks, with in-context learning (ICL) a popular adaptation strategy for new ones. But can VLMs learn novel concepts purely from visual demonstrations, or are they limited to adapting to the output format of ICL examples? We propose a new benchmark, Spatial Visual Ambiguity Tasks (SVAT), that challenges state-of-the-art VLMs to learn new visuospatial tasks in-context. We find that VLMs fail to do this zero-shot, and sometimes continue to fail after finetuning. However, augmenting the training data with simpler examples via curriculum learning improves ICL performance.

Comment: 13 pages, 4 figures. Code released at https://github.com/groundlight/vlm-visual-demonstrations
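The abstract does not spell out how the curriculum is constructed; the released code at the GitHub link above is the authoritative reference. Purely as an illustration of the general technique named (curriculum learning), the sketch below shows one common way to schedule fine-tuning data from easy to hard. The `difficulty` field, stage boundaries, and `fine_tune_step` callback are hypothetical and are not taken from the paper.

```python
import random
from typing import Callable, Dict, List


def curriculum_batches(
    examples: List[Dict],   # each example carries a numeric "difficulty" score (hypothetical field)
    num_stages: int = 3,
    batch_size: int = 8,
    seed: int = 0,
):
    """Yield (stage, batch) pairs, starting with the easiest examples.

    At stage k, batches are drawn from the easiest (k+1)/num_stages fraction of
    the data, so simpler demonstrations are seen first and harder ones are mixed
    in gradually -- a generic curriculum-learning schedule, not the paper's
    exact recipe.
    """
    rng = random.Random(seed)
    ordered = sorted(examples, key=lambda ex: ex["difficulty"])
    for stage in range(num_stages):
        cutoff = max(batch_size, len(ordered) * (stage + 1) // num_stages)
        pool = ordered[:cutoff]
        rng.shuffle(pool)
        for i in range(0, len(pool), batch_size):
            yield stage, pool[i : i + batch_size]


def run_curriculum(examples: List[Dict], fine_tune_step: Callable[[List[Dict]], None]):
    # `fine_tune_step` stands in for whatever VLM update routine is used
    # (e.g. one gradient step on the batch); it is an assumed interface here.
    for stage, batch in curriculum_batches(examples):
        fine_tune_step(batch)
```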

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.17080
Document Type :
Working Paper