
PaliGemma 2: A Family of Versatile VLMs for Transfer

Authors:
Steiner, Andreas
Pinto, André Susano
Tschannen, Michael
Keysers, Daniel
Wang, Xiao
Bitton, Yonatan
Gritsenko, Alexey
Minderer, Matthias
Sherbondy, Anthony
Long, Shangbang
Qin, Siyang
Ingle, Reeve
Bugliarello, Emanuele
Kazemzadeh, Sahar
Mesnard, Thomas
Alabdulmohsin, Ibrahim
Beyer, Lucas
Zhai, Xiaohua
Publication Year:
2024

Abstract

PaliGemma 2 is an upgrade of the PaliGemma open Vision-Language Model (VLM) based on the Gemma 2 family of language models. We combine the SigLIP-So400m vision encoder that was also used by PaliGemma with the whole range of Gemma 2 models, from the 2B model all the way up to the 27B model. We train these models at three resolutions (224px, 448px, and 896px) in multiple stages to equip them with broad knowledge for transfer via fine-tuning. The resulting family of base models covering different model sizes and resolutions allows us to investigate factors impacting transfer performance (such as learning rate) and to analyze the interplay between the type of task, model size, and resolution. We further increase the number and breadth of transfer tasks beyond the scope of PaliGemma, including different OCR-related tasks such as table structure recognition, molecular structure recognition, and music score recognition, as well as long fine-grained captioning and radiography report generation, on which PaliGemma 2 obtains state-of-the-art results.
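The abstract describes a grid of base models: three Gemma 2 backbone sizes crossed with three input resolutions. A minimal sketch of enumerating that grid, assuming the Hugging Face checkpoint-naming convention used for the public release (`google/paligemma2-<size>b-pt-<resolution>`, where the 3B/10B/28B VLM sizes correspond to the 2B/9B/27B Gemma 2 backbones plus the vision encoder) — the exact ids are an assumption, not quoted from this record:

```python
# Sketch: enumerate the PaliGemma 2 base-model grid described in the abstract
# (Gemma 2 2B/9B/27B backbones x 224/448/896 px resolutions).
# Checkpoint ids below follow the Hugging Face naming of the public release;
# treat them as an assumption rather than an official list.

def checkpoint_id(vlm_size_b: int, resolution: int) -> str:
    """Compose a pretrained ('pt') checkpoint id for a given total VLM size
    (billions of parameters) and input resolution (pixels)."""
    if vlm_size_b not in (3, 10, 28):
        raise ValueError("PaliGemma 2 sizes are 3B, 10B, and 28B")
    if resolution not in (224, 448, 896):
        raise ValueError("trained resolutions are 224, 448, and 896 px")
    return f"google/paligemma2-{vlm_size_b}b-pt-{resolution}"

# The full family: nine base models (3 sizes x 3 resolutions).
grid = [checkpoint_id(s, r) for s in (3, 10, 28) for r in (224, 448, 896)]
```

Any one of these ids could then be passed to a loader such as `transformers`' `PaliGemmaForConditionalGeneration.from_pretrained(...)` for fine-tuning on a downstream transfer task, which is the usage pattern the abstract's "transfer via fine-tuning" refers to.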

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.03555
Document Type:
Working Paper