Design, implementation and evaluation of the Czech realistic audio-visual speech synthesis
- Source: Signal Processing 86:3657-3673
- Publication Year: 2006
- Publisher: Elsevier BV, 2006.
Abstract
- This paper presents the whole process of creating an audio-visual speech synthesis system. Such a system consists of two main parts: acoustic synthesis emulating human speech and facial animation emulating human lip articulation. The acoustic subsystem is based on concatenative speech synthesis. The visual subsystem is designed as a realistic, fully three-dimensional, parametrically controllable facial animation model. To control the animation parametrically so that it emulates human articulation, a set of visual parameters has to be obtained for all basic speech units. To achieve realistic animation, a database of lip movements of a real person must be recorded and expressed through a suitable parameterization; the set of control parameters for the visual animation is then derived from this database. A 3D head model based on the head of a real person also makes the animation more realistic, and 3D scanning of a real person is used to obtain such a model. We present the design and implementation of this whole process. The aim is realistic audio-visual speech synthesis with the possibility of adapting the 3D head model to a particular person. The design, acquisition, and processing of an audio-visual speech corpus for this purpose are presented. Next, the process of both acoustic and visual speech synthesis is described. The visual speech synthesis comprises the tasks of model training, animation control, and co-articulation modelling. Facial animation can also increase the intelligibility of telephone speech for people with hearing disabilities; in that case, the textual information needed to control the animation is not available. A solution to the problem of mapping visual parameters from the speech signal, either directly or through recognized text, is presented. Furthermore, the 3D scanning algorithm is presented; it makes it possible to obtain a realistic 3D model based on the head of a real person and thus to personalize the talking head. At the end of the paper, an evaluation of the intelligibility of the presented audio-visual speech synthesis and its possible applications are presented.
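- The co-articulation modelling mentioned above blends the visual parameters of neighbouring speech units rather than jumping between static viseme targets. The sketch below shows one common way such blending can be done, a Cohen-Massaro-style dominance-function blend; the parameter names, target values, and blending weights are purely illustrative assumptions, not the model actually used in the paper, whose control parameters are derived from the recorded corpus.

```python
# Illustrative sketch of dominance-function co-articulation blending
# (Cohen & Massaro style). All targets and constants are hypothetical;
# the paper derives its control parameters from a recorded corpus.
import math

# Hypothetical viseme targets: each phone maps to a small vector of
# articulatory parameters (e.g. lip opening, lip rounding, jaw drop).
VISEME_TARGETS = {
    "a": (0.9, 0.2, 0.8),
    "o": (0.6, 0.9, 0.5),
    "m": (0.0, 0.3, 0.1),
}

def dominance(t, center, strength=1.0, rate=8.0):
    """Negative-exponential dominance of a viseme centred at `center` (s)."""
    return strength * math.exp(-rate * abs(t - center))

def blend(t, segments):
    """Blend viseme targets at time t.

    `segments` is a list of (phone, center_time) pairs for the utterance.
    Returns the dominance-weighted average of the target parameter vectors.
    """
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for phone, center in segments:
        w = dominance(t, center)
        num = [n + w * p for n, p in zip(num, VISEME_TARGETS[phone])]
        den += w
    return [n / den for n in num] if den else num

# Example: animation parameters for /m o a/ sampled at 25 fps
segments = [("m", 0.05), ("o", 0.20), ("a", 0.40)]
for frame in range(12):
    t = frame / 25.0
    print(f"t={t:.2f}s -> {[round(v, 2) for v in blend(t, segments)]}")
```

- Because every frame is a weighted mixture of all nearby targets, the lips start moving toward a vowel before the preceding consonant is released, which is the effect a co-articulation model is meant to capture.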
- Subjects: Facial expression, Computer science, Speech recognition, Speech corpus, Speech synthesis, Animation, Intelligibility (communication), Speech processing, Audio signal processing, Computer facial animation, Software, Control and Systems Engineering, Signal Processing, Computer Vision and Pattern Recognition, Electrical and Electronic Engineering
Details
- ISSN: 0165-1684
- Volume: 86
- Database: OpenAIRE
- Journal: Signal Processing
- Accession number: edsair.doi...........0ec0a86f0c7716cd1ef9fec45c50c4e2
- Full Text: https://doi.org/10.1016/j.sigpro.2006.02.039