MIMOSA: Human-AI Co-Creation of Computational Spatial Audio Effects on Videos

Authors :
Ning, Zheng
Zhang, Zheng
Ban, Jerrick
Jiang, Kaiwen
Gan, Ruohong
Tian, Yapeng
Li, Toby Jia-Jun
Publication Year :
2024

Abstract

Spatial audio offers viewers a more immersive video consumption experience; however, creating and editing spatial audio is often expensive and requires specialized equipment and skills, posing a high barrier for amateur video creators. We present MIMOSA, a human-AI co-creation tool that enables amateur users to computationally generate and manipulate spatial audio effects. For a video with only monaural or stereo audio, MIMOSA automatically grounds each sound source to the corresponding sounding object in the visual scene and enables users to validate and fix errors in the locations of sounding objects. Users can also augment the spatial audio effect by flexibly manipulating the sound source positions and creatively customizing the audio effect. The design of MIMOSA exemplifies a human-AI collaboration approach that, instead of utilizing state-of-the-art end-to-end "black-box" ML models, uses a multistep pipeline that aligns its interpretable intermediate results with the user's workflow. A lab user study with 15 participants demonstrates MIMOSA's usability, usefulness, expressiveness, and capability in creating immersive spatial audio effects in collaboration with users.

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438550015
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.1145/3635636.3656189