1. MSTS: A Multimodal Safety Test Suite for Vision-Language Models
- Authors
Paul Röttger, Giuseppe Attanasio, Felix Friedrich, Janis Goldzycher, Alicia Parrish, Rishabh Bhardwaj, Chiara Di Bonaventura, Roman Eng, Gaia El Khoury Geagea, Sujata Goswami, Jieun Han, Dirk Hovy, Seogyeong Jeong, Paloma Jeretič, Flor Miriam Plaza-del-Arco, Donya Rooein, Patrick Schramowski, Anastassia Shaitarova, Xudong Shen, Richard Willats, Andrea Zugarini, and Bertie Vidgen
- Subjects
Computer Science - Computation and Language
- Abstract
Vision-language models (VLMs), which process image and text inputs, are increasingly integrated into chat assistants and other consumer AI applications. Without proper safeguards, however, VLMs may give harmful advice (e.g. how to self-harm) or encourage unsafe behaviours (e.g. to consume drugs). Despite these clear hazards, little work so far has evaluated VLM safety and the novel risks created by multimodal inputs. To address this gap, we introduce MSTS, a Multimodal Safety Test Suite for VLMs. MSTS comprises 400 test prompts across 40 fine-grained hazard categories. Each test prompt consists of a text and an image that only in combination reveal their full unsafe meaning. With MSTS, we find clear safety issues in several open VLMs. We also find some VLMs to be safe by accident, meaning that they are safe because they fail to understand even simple test prompts. We translate MSTS into ten languages, showing that non-English prompts increase the rate of unsafe model responses. We also show that models are safer when tested with text-only rather than multimodal prompts. Finally, we explore the automation of VLM safety assessments, finding that even the best safety classifiers are lacking.
- Comment
under review
- Published
2025