
MetaMGC: a music generation framework for concerts in metaverse

Authors:
Cong Jin
Fengjuan Wu
Jing Wang
Yang Liu
Zixuan Guan
Zhe Han
Source:
EURASIP Journal on Audio, Speech, and Music Processing. 2022
Publication Year:
2022
Publisher:
Springer Science and Business Media LLC, 2022.

Abstract

In recent years, metaverse concerts have attracted a surge of popular interest. However, existing metaverse concert efforts often focus on immersive visual experiences and give little consideration to the musical and aural experience. Yet for a concert, it is the music itself and the immersive listening experience that deserve the most attention. Enhancing intelligent and immersive musical experiences is therefore essential for the further development of the metaverse. With this in mind, we propose a metaverse concert generation framework that spans intelligent music generation, stereo conversion, and sound field design for virtual concert stages. First, combining the ideas of reinforcement learning and value functions, we improve the Transformer-XL music generation network and train it on all the music in the POP909 dataset. Experiments show that both improved algorithms outperform the original method on both objective and subjective evaluation metrics. In addition, this paper validates a neural rendering method that generates spatial audio from a binaural-integrated, fully convolutional neural network. The purely data-driven, end-to-end model proves more reliable than traditional spatial audio generation methods such as HRTF-based rendering. Finally, we propose a metadata-based audio rendering algorithm to simulate real-world acoustic environments.
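Two of the components summarized above are easy to picture in code. First, the music generation step: the paper improves Transformer-XL with reinforcement learning and value functions. As an illustration only, and not the authors' implementation, the sketch below biases a model's next-token logits with a hypothetical per-token value estimate before sampling; model_logits and value_estimates are stand-ins for the outputs of a trained network.

    # A minimal sketch (assumed names, not the paper's code) of
    # value-guided sampling: next-token logits from a music language
    # model are reweighted by a learned value estimate, echoing the
    # idea of combining Transformer-XL with RL value functions.
    import numpy as np

    def value_guided_sample(model_logits, value_estimates, beta=1.0, rng=None):
        """Sample a next token from logits biased by per-token values.

        model_logits:    (vocab,) raw scores from the music model.
        value_estimates: (vocab,) estimated long-term reward per token.
        beta:            weight of the value term; beta=0 recovers the
                         plain Transformer-XL sampling distribution.
        """
        rng = np.random.default_rng() if rng is None else rng
        scores = model_logits + beta * value_estimates  # policy + value
        scores -= scores.max()                          # numerical stability
        probs = np.exp(scores) / np.exp(scores).sum()   # softmax
        return rng.choice(len(probs), p=probs)

    # Toy usage with random stand-ins for a 128-token event vocabulary.
    logits = np.random.randn(128)
    values = np.random.randn(128)
    token = value_guided_sample(logits, values, beta=0.5)

Second, the conventional HRTF baseline that the neural renderer is compared against: binaural synthesis by convolving a mono source with a left/right head-related impulse response (HRIR) pair. The sketch below uses random placeholder HRIRs purely for illustration; in practice they would come from a measured HRTF database indexed by source direction.

    # A minimal sketch of HRTF-style binaural rendering: the mono
    # signal is convolved with each ear's impulse response. The HRIRs
    # here are placeholders, not measured data.
    import numpy as np
    from scipy.signal import fftconvolve

    def hrtf_binauralize(mono, hrir_left, hrir_right):
        """Convolve a mono signal with an HRIR pair -> (n, 2) array."""
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)

    fs = 44100
    mono = np.random.randn(fs)            # 1 s of placeholder audio
    hrir_l = np.random.randn(256) * 0.01  # placeholder left-ear HRIR
    hrir_r = np.random.randn(256) * 0.01  # placeholder right-ear HRIR
    binaural = hrtf_binauralize(mono, hrir_l, hrir_r)

The paper's point is that an end-to-end neural renderer can replace this fixed convolution pipeline with a model learned directly from data.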

Details

ISSN:
1687-4722
Volume:
2022
Database:
OpenAIRE
Journal:
EURASIP Journal on Audio, Speech, and Music Processing
Accession number:
edsair.doi...........030d4e1a728a9cfe506bc8d29ea0aaa0
Full Text:
https://doi.org/10.1186/s13636-022-00261-8