Transforming an embodied conversational agent into an efficient talking head: from keyframe-based animation to multimodal concatenation synthesis

Overview of attention for article published in Computational Cognitive Science, September 2015

Citations
2 (Dimensions)

Readers
28 (Mendeley)

Title
Transforming an embodied conversational agent into an efficient talking head: from keyframe-based animation to multimodal concatenation synthesis
Published in
Computational Cognitive Science, September 2015
DOI 10.1186/s40469-015-0007-8
Authors

Guillaume Gibert, Kirk N. Olsen, Yvonne Leung, Catherine J. Stevens

Abstract

Virtual humans have become part of our everyday life (movies, the internet, and computer games). Even though they are becoming more and more realistic, their speech capabilities are usually limited, and not coherent and/or not synchronous with the corresponding acoustic signal. We describe a method to convert a virtual human avatar (animated through key frames and interpolation) into a more naturalistic talking head. Speech articulation cannot be accurately replicated by interpolating between key frames; talking heads with good speech capabilities are instead derived from real speech production data.

Motion capture data are commonly used to provide accurate facial motion for the visible speech articulators (jaw and lips) synchronously with the acoustics. To access tongue trajectories (a partially occluded speech articulator), electromagnetic articulography (EMA) is often used. We recorded a large database of phonetically balanced English sentences with synchronous EMA, motion capture data, and acoustics. An articulatory model was computed on this database to recover missing data and to provide 'normalized' animation (i.e., articulatory) parameters. In addition, semi-automatic segmentation was performed on the acoustic stream. A dictionary of multimodal Australian English diphones was created; it is composed of the variation of the articulatory parameters between all successive stable allophones.

The avatar's facial key frames were converted into articulatory parameters steering its speech articulators (jaw, lips, and tongue). The speech production database was used to drive the Embodied Conversational Agent (ECA) and to enhance its speech capabilities. A Text-To-Auditory Visual Speech synthesizer was created based on the MaryTTS software and on the diphone dictionary derived from the speech production database. In summary, we describe a method to transform an ECA with a generic tongue model and key-frame animation into a talking head that displays naturalistic tongue, jaw, and lip motions. Thanks to a multimodal speech production database, a Text-To-Auditory Visual Speech synthesizer drives the ECA's facial movements, enhancing its speech capabilities.
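
To make the concatenation step concrete, the sketch below shows one way a diphone dictionary of articulatory parameter variations could be stitched into a continuous trajectory for the avatar. This is a minimal illustration under stated assumptions, not the authors' MaryTTS-based implementation: the names DIPHONE_DICT, synthesize_trajectory, and N_PARAMS are hypothetical, and the stored deltas are random placeholders standing in for units extracted from the recorded EMA/motion-capture database.

```python
# Hypothetical sketch of diphone-based concatenation of articulatory
# trajectories. Assumes each diphone stores the frame-by-frame variation
# of the articulatory parameters between two stable allophones, as the
# abstract describes; all names and data here are illustrative only.
import numpy as np

N_PARAMS = 6  # hypothetical number of articulatory parameters (jaw, lips, tongue)

# Hypothetical dictionary: (left phone, right phone) -> per-frame parameter deltas.
DIPHONE_DICT = {
    ("h", "@"):  np.random.randn(12, N_PARAMS) * 0.1,
    ("@", "l"):  np.random.randn(10, N_PARAMS) * 0.1,
    ("l", "@U"): np.random.randn(14, N_PARAMS) * 0.1,
}

def synthesize_trajectory(phones, start=None, blend=3):
    """Concatenate diphone units into one articulatory trajectory.

    phones -- phone sequence produced by a TTS front end (e.g. MaryTTS)
    start  -- initial articulatory configuration (defaults to a neutral pose)
    blend  -- frames linearly cross-faded at each joint to avoid jumps
    """
    pose = np.zeros(N_PARAMS) if start is None else np.asarray(start, dtype=float)
    frames = [pose]
    for left, right in zip(phones[:-1], phones[1:]):
        deltas = DIPHONE_DICT.get((left, right))
        if deltas is None:
            continue  # a real system would back off to a similar diphone
        # Integrate the stored deltas starting from the current pose.
        segment = frames[-1] + np.cumsum(deltas, axis=0)
        if blend and len(frames) > 1 and len(segment) >= blend:
            # Simple linear cross-fade over the first `blend` frames of the joint.
            w = np.linspace(0.0, 1.0, blend)[:, None]
            segment[:blend] = (1.0 - w) * frames[-1] + w * segment[:blend]
        frames.extend(segment)
    return np.vstack(frames)  # (n_frames, N_PARAMS) array driving the avatar

if __name__ == "__main__":
    trajectory = synthesize_trajectory(["h", "@", "l", "@U"])
    print(trajectory.shape)  # e.g. (37, 6): one articulatory pose per frame
```

The cross-fade at each joint is just one smoothing choice; a production system would instead select units from the recorded database and handle missing diphones with a proper back-off strategy.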

Mendeley readers

The data shown below were compiled from readership statistics for the 28 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    28       100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Master                  8        29%
Student > Ph. D. Student          4        14%
Student > Bachelor                3        11%
Researcher                        2        7%
Other                             1        4%
Other                             3        11%
Unknown                           7        25%

Readers by discipline             Count    As %
Computer Science                  9        32%
Engineering                       4        14%
Medicine and Dentistry            2        7%
Nursing and Health Professions    1        4%
Decision Sciences                 1        4%
Other                             3        11%
Unknown                           8        29%