
Dual Domain-Adversarial Learning for Audio-Visual Saliency Prediction

Authors :
Fan, Yingzi
Han, Longfei
Zhang, Yue
Cheng, Lechao
Xia, Chen
Hu, Di
Publication Year :
2022

Abstract

Both visual and auditory information are valuable for determining the salient regions in videos. Deep convolutional neural networks (CNNs) have shown strong capacity for the audio-visual saliency prediction task. However, due to factors such as shooting scenes and weather, there often exists a moderate distribution discrepancy between the source training data and the target testing data. This domain discrepancy leads to performance degradation of CNN models on the target testing data. This paper makes an early attempt to tackle the unsupervised domain adaptation problem for audio-visual saliency prediction. We propose a dual domain-adversarial learning algorithm to mitigate the domain discrepancy between source and target data. First, a dedicated domain discrimination branch is built to align the auditory feature distributions. Then, the auditory features are fused into the visual features through a cross-modal self-attention module. A second domain discrimination branch is devised to reduce the domain discrepancy of the visual features and of the audio-visual correlations implied by the fused audio-visual features. Experiments on public benchmarks demonstrate that our method can relieve the performance degradation caused by domain discrepancy.

Comment: Accepted by ACM MM workshop 2022 (HCMA2022)
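The dual-branch adversarial scheme described in the abstract can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the feature dimensions, the use of a gradient reversal layer (the standard mechanism for domain-adversarial training), the linear encoders, and all module names here are assumptions. It shows the two domain discriminators, one on the audio features and one on the fused audio-visual features produced by cross-modal attention.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated (scaled)
    gradient on the backward pass, so the feature encoder is trained to
    confuse the domain discriminator."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DualDomainAdversarialNet(nn.Module):
    """Hypothetical sketch of the dual domain-adversarial design:
    an audio-only discrimination branch plus a second branch on the
    fused audio-visual features. Dimensions are illustrative."""

    def __init__(self, audio_dim=128, visual_dim=256, dim=64):
        super().__init__()
        self.audio_enc = nn.Linear(audio_dim, dim)
        self.visual_enc = nn.Linear(visual_dim, dim)
        # Cross-modal attention: visual queries attend to audio keys/values.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.audio_disc = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 2))
        self.fused_disc = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 2))
        self.saliency_head = nn.Linear(dim, 1)

    def forward(self, audio, visual, lambd=1.0):
        a = self.audio_enc(audio)    # (B, T, dim)
        v = self.visual_enc(visual)  # (B, T, dim)
        fused, _ = self.attn(v, a, a)
        saliency = self.saliency_head(fused)
        # Both discriminators see gradient-reversed features, so their
        # domain-classification loss adversarially aligns source/target.
        d_audio = self.audio_disc(grad_reverse(a, lambd))
        d_fused = self.fused_disc(grad_reverse(fused, lambd))
        return saliency, d_audio, d_fused
```

In training, the saliency loss would be computed on labeled source data while the two domain-classification losses (source vs. target) are computed on both domains; the reversal layer makes minimizing those losses push the encoders toward domain-invariant features.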

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2208.05220
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3552458.3556447