
Revealing Multimodal Contrastive Representation Learning through Latent Partial Causal Models

Authors:
Liu, Yuhang
Zhang, Zhen
Gong, Dong
Huang, Biwei
Gong, Mingming
Hengel, Anton van den
Zhang, Kun
Shi, Javen Qinfeng
Publication Year:
2024

Abstract

Multimodal contrastive representation learning methods have proven successful across a range of domains, partly due to their ability to generate meaningful shared representations of complex phenomena. To deepen the analysis and understanding of these learned representations, we introduce a unified causal model specifically designed for multimodal data. By examining this model, we show that multimodal contrastive representation learning excels at identifying latent coupled variables within the proposed unified model, up to linear or permutation transformations, depending on the assumptions made. Our findings illuminate the potential of pre-trained multimodal models, e.g., CLIP, to learn disentangled representations through a surprisingly simple yet highly effective tool: linear independent component analysis. Experiments demonstrate the robustness of our findings, even when the assumptions are violated, and validate the effectiveness of the proposed method in learning disentangled representations.
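
A minimal sketch of the procedure the abstract points to: running linear independent component analysis on frozen CLIP embeddings to probe for disentangled factors. This is not the authors' code; the Hugging Face checkpoint name, the random image batch, and the component count are illustrative assumptions.

    import numpy as np
    import torch
    from PIL import Image
    from sklearn.decomposition import FastICA
    from transformers import CLIPModel, CLIPProcessor

    # Load a pre-trained CLIP checkpoint (illustrative choice of weights).
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # A stand-in batch of images; in practice these would come from a real dataset.
    images = [
        Image.fromarray(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))
        for _ in range(64)
    ]
    inputs = processor(images=images, return_tensors="pt")

    with torch.no_grad():
        # Frozen CLIP image embeddings, one row per image, from the shared multimodal space.
        embeddings = model.get_image_features(**inputs).cpu().numpy()

    # Linear ICA on the frozen embeddings: if the learned representation equals the
    # latent coupled variables up to a linear transformation, ICA can recover
    # independent factors up to permutation and scaling.
    ica = FastICA(n_components=10, whiten="unit-variance", random_state=0)
    factors = ica.fit_transform(embeddings)  # shape: (64, 10)

Under the identifiability results described in the abstract, the columns of the recovered factors would correspond to latent variables only up to reordering and scaling, which is why a simple linear ICA step suffices after contrastive pre-training.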

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession number:
edsoai.on1438524555
Document Type:
Electronic Resource