A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images
- Authors
Lim, Yongwan, Toutios, Asterios, Bliesener, Yannick, Tian, Ye, Lingala, Sajan Goud, Vaz, Colin, Sorensen, Tanner, Oh, Miran, Harper, Sarah, Chen, Weiyi, Lee, Yoonjeong, Töger, Johannes, Montesserin, Mairym Lloréns, Smith, Caitlin, Godinez, Bianca, Goldstein, Louis, Byrd, Dani, Nayak, Krishna S., and Narayanan, Shrikanth S.
- Subjects
Electrical Engineering and Systems Science - Signal Processing; Computer Science - Sound; Electrical Engineering and Systems Science - Audio and Speech Processing; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Real-time magnetic resonance imaging (RT-MRI) of human speech production is enabling significant advances in speech science, linguistics, bio-inspired speech technology development, and clinical applications. Easy access to RT-MRI, however, is limited, and comprehensive datasets with broad access are needed to catalyze research across numerous domains. The imaging of the rapidly moving articulators and dynamic airway shaping during speech demands high spatio-temporal resolution and robust reconstruction methods. Further, while reconstructed images have been published, to date there is no open dataset providing raw multi-coil RT-MRI data from an optimized speech production experimental setup. Such datasets could enable new and improved methods for dynamic image reconstruction, artifact correction, feature extraction, and direct extraction of linguistically relevant biomarkers. The present dataset offers a unique corpus of 2D sagittal-view RT-MRI videos along with synchronized audio for 75 subjects performing linguistically motivated speech tasks, alongside the corresponding first-ever public-domain raw RT-MRI data. The dataset also includes 3D volumetric vocal tract MRI during sustained speech sounds and high-resolution static anatomical T2-weighted upper airway MRI for each subject.
- Comment
27 pages, 6 figures, 5 tables, submitted to Nature Scientific Data
- Published
- 2021