
VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution

Authors :
Hall, Siobhan Mackenzie
Abrantes, Fernanda Gonçalves
Zhu, Hanwen
Sodunke, Grace
Shtedritski, Aleksandar
Kirk, Hannah Rose
Publication Year :
2023

Abstract

We introduce VisoGender, a novel dataset for benchmarking gender bias in vision-language models. We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas, where each image is associated with a caption containing a pronoun relationship of subjects and objects in the scene. VisoGender is balanced by gender representation in professional roles, supporting bias evaluation in two ways: i) resolution bias, where we evaluate the difference between pronoun resolution accuracies for image subjects with gender presentations perceived as masculine versus feminine by human annotators, and ii) retrieval bias, where we compare ratios of professionals perceived to have masculine and feminine gender presentations retrieved for a gender-neutral search query. We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes. While the direction and magnitude of gender bias depend on the task and the model being evaluated, captioning models are generally less biased than Vision-Language Encoders. Dataset and code are available at https://github.com/oxai/visogender

Comment: NeurIPS Datasets and Benchmarks 2023. Data and code available at https://github.com/oxai/visogender
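To make the two evaluation settings concrete, below is a minimal sketch of how such scores could be computed. It is not the authors' implementation (which lives in the linked repository); the record fields `perceived_gender` and `resolved_correctly`, and the top-k cutoff, are assumptions made purely for illustration.

```python
# Illustrative sketch of the two bias scores described in the abstract.
# NOTE: this is not the VisoGender codebase; record fields and the top-k
# cutoff are hypothetical choices for the example.

from typing import Mapping, Sequence


def resolution_bias(records: Sequence[Mapping]) -> float:
    """Pronoun-resolution accuracy gap: masculine-presenting minus
    feminine-presenting image subjects (0.0 would indicate no gap)."""
    def accuracy(gender: str) -> float:
        hits = [r["resolved_correctly"] for r in records
                if r["perceived_gender"] == gender]
        return sum(hits) / len(hits) if hits else 0.0

    return accuracy("masculine") - accuracy("feminine")


def retrieval_bias(ranked_genders: Sequence[str], k: int = 10) -> float:
    """Ratio of masculine- to feminine-presenting professionals among the
    top-k results retrieved for a gender-neutral occupation query."""
    top_k = list(ranked_genders[:k])
    masculine = sum(g == "masculine" for g in top_k)
    feminine = sum(g == "feminine" for g in top_k)
    return masculine / feminine if feminine else float("inf")
```

Under these simplified definitions, a resolution gap of 0.0 and a retrieval ratio of 1.0 would correspond to parity between the two perceived gender presentations.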

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2306.12424
Document Type :
Working Paper