
Template Matters: Understanding the Role of Instruction Templates in Multimodal Language Model Evaluation and Training

Authors :
Wang, Shijian
Song, Linxin
Zhang, Jieyu
Shimizu, Ryotaro
Luo, Ao
Yao, Li
Chen, Cunjian
McAuley, Julian
Wu, Hanqian
Publication Year :
2024

Abstract

Current multimodal language model (MLM) evaluation and training approaches overlook the influence of instruction format, presenting an elephant-in-the-room problem. Previous research addresses this problem by manually crafting instructions, but fails to yield significant insights due to limited diversity and scalability. In this work, we propose a programmatic instruction template generator capable of producing over 39B unique template combinations by filling randomly sampled positional synonyms into weight-sampled meta templates, enabling us to comprehensively examine MLM performance across diverse instruction templates. Our experiments with eight common MLMs on five benchmark datasets reveal that MLMs are highly sensitive to templates, with performance gaps of up to 29% between different templates. We further augment the instruction tuning dataset of LLaVA-1.5 with our template generator and perform instruction tuning on LLaVA-1.5-7B and LLaVA-1.5-13B. Models tuned on our augmented dataset achieve the best overall performance when compared with same-scale MLMs tuned on datasets up to 75 times larger than our augmented dataset, highlighting the importance of instruction templates in MLM training. The code is available at https://github.com/shijian2001/TemplateMatters.
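The abstract describes a two-stage sampling procedure: first draw a meta template according to sampling weights, then fill each of its placeholder positions with a randomly sampled synonym. The Python sketch below illustrates one plausible reading of that design; the meta templates, placeholder names, synonym pools, and weights are invented for illustration and are not the authors' actual data (see the linked repository for the real generator).

```python
import random

# Illustrative meta templates with sampling weights (assumed, not from the paper).
META_TEMPLATES = [
    ("{answer_verb} the following {question_noun} about the image: {question}", 3),
    ("Given the image, {answer_verb} this {question_noun}: {question}", 2),
    ("{question} {answer_verb} based on the {image_noun}.", 1),
]

# Positional synonym pools: each placeholder draws from its own synonym set.
SYNONYMS = {
    "answer_verb": ["Answer", "Respond to", "Address"],
    "question_noun": ["question", "query", "prompt"],
    "image_noun": ["image", "picture", "photo"],
}

def generate_instruction(question: str, rng: random.Random) -> str:
    """Sample a meta template by weight, then fill each placeholder with a
    randomly chosen synonym to produce one instruction template instance."""
    templates, weights = zip(*META_TEMPLATES)
    meta = rng.choices(templates, weights=weights, k=1)[0]
    filled = {key: rng.choice(pool) for key, pool in SYNONYMS.items()}
    # Extra keys not used by a given template are ignored by str.format.
    return meta.format(question=question, **filled)

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        print(generate_instruction("What color is the car?", rng))
```

The combination count grows multiplicatively: each meta template contributes the product of its placeholders' synonym-pool sizes, which is how a modest number of meta templates and synonym positions can scale to the billions of unique templates reported in the abstract.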

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.08307
Document Type :
Working Paper