Supervised Symbolic Music Style Translation Using Synthetic Data
- Source :
- Proceedings of the 20th International Society for Music Information Retrieval Conference (2019) 588-595
- Publication Year :
- 2019
Abstract
- Research on style transfer and domain translation has clearly demonstrated the ability of deep learning-based algorithms to manipulate images in terms of artistic style. More recently, several attempts have been made to extend such approaches to music (both symbolic and audio) in order to enable transforming musical style in a similar manner. In this study, we focus on symbolic music with the goal of altering the 'style' of a piece while keeping its original 'content'. Unlike current methods, which are inherently restricted to being unsupervised due to the lack of 'aligned' data (i.e. the same musical piece played in multiple styles), we develop the first fully supervised algorithm for this task. At the core of our approach lies a synthetic data generation scheme which allows us to produce virtually unlimited amounts of aligned data and hence avoid the above issue. Building on this data generation scheme, we propose an encoder-decoder model for translating symbolic music accompaniments between a number of different styles. Our experiments show that our models, although trained entirely on synthetic data, are capable of producing musically meaningful accompaniments even for real (non-synthetic) MIDI recordings.
- Comment: ISMIR 2019 camera-ready
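The record does not include implementation details, so the following is only a minimal sketch of the kind of supervised encoder-decoder training the abstract describes: a recurrent encoder reads a source-style accompaniment, the decoder is conditioned on a target style, and training teacher-forces on the aligned target that the synthetic data scheme makes available. All names, tensor shapes, and the piano-roll representation are assumptions for illustration, not the authors' actual model.

import torch
import torch.nn as nn

class StyleTranslator(nn.Module):
    """Hypothetical encoder-decoder for accompaniment style translation."""
    def __init__(self, n_pitches=128, n_styles=4, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(n_pitches, hidden, batch_first=True)  # reads the source accompaniment
        self.style_emb = nn.Embedding(n_styles, hidden)             # target-style conditioning
        self.decoder = nn.GRU(n_pitches, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_pitches)

    def forward(self, src_roll, tgt_style, tgt_roll):
        _, h = self.encoder(src_roll)                    # content representation of the source
        h = h + self.style_emb(tgt_style).unsqueeze(0)   # inject the target style
        # Teacher forcing on the aligned target, which is only available
        # because the training pairs are generated synthetically.
        dec_in = torch.cat([torch.zeros_like(tgt_roll[:, :1]), tgt_roll[:, :-1]], dim=1)
        y, _ = self.decoder(dec_in, h)
        return self.out(y)                               # per-step pitch activation logits

# One supervised training step on synthetic aligned piano rolls (dummy tensors).
model = StyleTranslator()
src = torch.rand(8, 64, 128)                             # piece rendered in the source style
tgt = (torch.rand(8, 64, 128) > 0.9).float()             # same piece rendered in the target style
style = torch.randint(0, 4, (8,))                        # index of the target style
logits = model(src, style, tgt)
loss = nn.functional.binary_cross_entropy_with_logits(logits, tgt)
loss.backward()

The actual representation, architecture, and set of styles in the paper may differ; the point of the sketch is simply that aligned source/target pairs turn style translation into ordinary supervised sequence-to-sequence training.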
Details
- Database :
- arXiv
- Journal :
- Proceedings of the 20th International Society for Music Information Retrieval Conference (2019) 588-595
- Publication Type :
- Report
- Accession number :
- edsarx.1907.02265
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.5281/zenodo.3527878