
Topological Perspectives on Optimal Multimodal Embedding Spaces

Authors:
Abdul Aziz, A. B.
Abdul Rahim, A. B.
Publication Year:
2024

Abstract

Recent strides in multimodal model development have ignited a paradigm shift in the realm of text-to-image generation. Among these advancements, CLIP stands out as a remarkable achievement: a sophisticated model that encodes both textual and visual information within a unified latent space. This paper presents a comparative analysis of CLIP and its recent counterpart, CLOOB. To unravel the distinctions between the embedding spaces crafted by these models, we employ topological data analysis. Our approach encompasses a comprehensive examination of the drivers of the modality gap, the clustering structures present in both high and low dimensions, and the pivotal role that dimension collapse plays in shaping the respective embedding spaces. Empirical experiments substantiate the implications of our analyses for downstream performance across various contextual scenarios. Through this investigation, we aim to shed light on the nuanced intricacies that underlie the comparative efficacy of CLIP and CLOOB, offering insights into their respective strengths and weaknesses, and providing a foundation for further refinement and advancement in multimodal model research.

Comment: 10 pages, 17 figures, 2 tables
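Two of the quantities the abstract mentions, the modality gap and dimension collapse, have simple common operationalizations. As an illustrative sketch (not the paper's exact methodology), the gap can be measured as the distance between the centroids of the text and image embedding clouds, and collapse can be probed via the effective rank of an embedding matrix; the synthetic embeddings below are stand-ins for real CLIP/CLOOB outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    # CLIP-style embeddings live on the unit hypersphere.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Synthetic stand-ins for text and image embeddings (512 samples, 128 dims);
# the opposite means mimic the two modalities occupying separate cones.
text = l2_normalize(rng.normal(loc=0.5, scale=1.0, size=(512, 128)))
image = l2_normalize(rng.normal(loc=-0.5, scale=1.0, size=(512, 128)))

# Modality gap: Euclidean distance between the two modality centroids
# (one common operationalization in the literature).
gap = float(np.linalg.norm(text.mean(axis=0) - image.mean(axis=0)))

def effective_rank(x):
    # Effective rank = exp(entropy of normalized singular values);
    # values far below the ambient dimension indicate dimension collapse.
    s = np.linalg.svd(x, compute_uv=False)
    p = s / s.sum()
    return float(np.exp(-(p * np.log(p)).sum()))

print(f"modality gap: {gap:.3f}")
print(f"effective rank (text): {effective_rank(text):.1f} / {text.shape[1]}")
```

A collapsed embedding space would show an effective rank much smaller than the ambient dimension, while a healthy one stays close to it.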

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.18867
Document Type:
Working Paper