
Drop the beat! Freestyler for Accompaniment Conditioned Rapping Voice Generation

Authors :
Ning, Ziqian
Wang, Shuai
Jiang, Yuepeng
Yao, Jixun
He, Lei
Pan, Shifeng
Ding, Jie
Xie, Lei
Publication Year :
2024

Abstract

Rap, a prominent genre of vocal performance, remains underexplored in vocal generation. General vocal synthesis depends on precise note and duration inputs, requiring users to have related musical knowledge, which limits flexibility. In contrast, rap typically features simpler melodies, with a core focus on a strong rhythmic sense that harmonizes with the accompanying beats. In this paper, we propose Freestyler, the first system that generates rapping vocals directly from lyrics and accompaniment inputs. Freestyler uses language model-based token generation, followed by a conditional flow matching model to produce spectrograms and a neural vocoder to restore audio. A 3-second reference prompt enables zero-shot timbre control. Due to the scarcity of publicly available rap datasets, we also present RapBank, a rap song dataset collected from the internet, alongside a meticulously designed processing pipeline. Experimental results show that Freestyler produces high-quality rapping vocals with enhanced naturalness and strong alignment with the accompanying beats, both stylistically and rhythmically.
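The three-stage pipeline the abstract describes (lyric/accompaniment-conditioned token generation, conditional flow matching to spectrograms, then a neural vocoder) can be sketched as follows. This is a toy illustration only: every class, function, and dimension here is a hypothetical stand-in, not the authors' actual implementation.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract.
# All names and shapes are illustrative assumptions, not the authors' code.
from dataclasses import dataclass
import random


@dataclass
class RapRequest:
    lyrics: str            # text to be rapped
    accompaniment: list    # e.g. beat features extracted from the backing track
    timbre_prompt: list    # ~3 s of reference audio frames for zero-shot timbre


def generate_vocal_tokens(req: RapRequest) -> list:
    """Stage 1 (stand-in): a language model autoregressively emits discrete
    vocal tokens conditioned on lyrics and accompaniment features."""
    random.seed(0)  # deterministic placeholder for the LM
    return [random.randrange(1024) for _ in range(len(req.lyrics))]


def flow_matching_to_spectrogram(tokens: list, timbre_prompt: list) -> list:
    """Stage 2 (stand-in): conditional flow matching maps tokens (plus the
    timbre prompt) to mel-spectrogram frames; here each token becomes one
    dummy 80-bin frame."""
    return [[t / 1024.0] * 80 for t in tokens]


def vocoder(spectrogram: list) -> list:
    """Stage 3 (stand-in): a neural vocoder restores the waveform; here each
    frame is collapsed to a single placeholder sample value."""
    return [sum(frame) / len(frame) for frame in spectrogram]


def freestyler(req: RapRequest) -> list:
    tokens = generate_vocal_tokens(req)
    spec = flow_matching_to_spectrogram(tokens, req.timbre_prompt)
    return vocoder(spec)


audio = freestyler(RapRequest("drop the beat", [0.5] * 16, [0.1] * 48))
print(len(audio))  # one output sample per input token in this toy sketch
```

In the toy version, each stage simply changes representation (text → tokens → frames → samples), which mirrors the data flow of the described system even though the real models are learned networks.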

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2408.15474
Document Type :
Working Paper