
AVCap: Leveraging Audio-Visual Features as Text Tokens for Captioning

Authors:
Kim, Jongsuk
Shin, Jiwon
Kim, Junmo
Publication Year:
2024

Abstract

In recent years, advances in representation learning and language models have propelled Automated Captioning (AC) to new heights, enabling the generation of human-level descriptions. Leveraging these advances, we propose AVCap, a simple yet powerful baseline framework for audio-visual captioning. AVCap uses audio-visual features as text tokens, which benefits not only performance but also the extensibility and scalability of the model. AVCap is designed around three pivotal dimensions: the exploration of optimal audio-visual encoder architectures, the adaptation of pre-trained models to the characteristics of the generated text, and an investigation of the efficacy of modality fusion in captioning. Our method outperforms existing audio-visual captioning methods across all metrics, and the code is available at https://github.com/JongSuk1/AVCap

Comment: Interspeech 2024
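To make the "audio-visual features as text tokens" idea concrete, below is a minimal PyTorch-style sketch. Everything here is an illustrative assumption, not the authors' implementation: the class name AVCaptioner, the dimensions, and the plain causal-masked Transformer stack are placeholders, and the fused audio-visual features are assumed to be precomputed by some encoder. The actual architecture is in the linked repository.

    import torch
    import torch.nn as nn

    class AVCaptioner(nn.Module):
        """Sketch: project audio-visual features into the decoder's token
        space and prepend them to the caption token embeddings, so the
        language model consumes them exactly like text tokens."""

        def __init__(self, av_dim=768, text_dim=768, vocab_size=30522, n_layers=4):
            super().__init__()
            # Map encoder features into the text-token embedding space.
            self.av_proj = nn.Linear(av_dim, text_dim)
            self.token_emb = nn.Embedding(vocab_size, text_dim)
            # Stand-in decoder-only stack; causality is enforced via a mask.
            layer = nn.TransformerEncoderLayer(d_model=text_dim, nhead=8, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.lm_head = nn.Linear(text_dim, vocab_size)

        def forward(self, av_feats, caption_ids):
            # av_feats: (B, N_av, av_dim) fused audio-visual features
            # caption_ids: (B, N_txt) shifted caption token ids
            av_tokens = self.av_proj(av_feats)        # (B, N_av, text_dim)
            txt_tokens = self.token_emb(caption_ids)  # (B, N_txt, text_dim)
            # Prefix the caption with the audio-visual "tokens".
            seq = torch.cat([av_tokens, txt_tokens], dim=1)
            # Simplification: one causal mask over the whole sequence
            # (a prefix-LM would let the AV prefix attend bidirectionally).
            mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
            hidden = self.blocks(seq, mask=mask)
            # Predict next tokens only over the text positions.
            return self.lm_head(hidden[:, av_tokens.size(1):])

    model = AVCaptioner()
    av = torch.randn(2, 16, 768)             # e.g. 16 fused AV feature vectors per clip
    caps = torch.randint(0, 30522, (2, 12))  # hypothetical caption token ids
    logits = model(av, caps)                 # (2, 12, 30522) next-token logits

Treating the projected features as ordinary tokens is what gives the approach its extensibility: swapping encoders, adding modalities, or scaling the decoder changes only the prefix, not the captioning head.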

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2407.07801
Document Type:
Working Paper