Benchmarking Speech Synchronized Facial Animation Based on Context-Dependent Visemes

Date issued

2007

Publisher

Václav Skala - UNION Agency

Abstract

In this paper we evaluate how effectively a speech-synchronized facial animation system based on context-dependent visemes conveys speech information. The evaluation procedure is based on an oral speech intelligibility test conducted with, and without, supplementary visual information provided by a real and a virtual speaker. Three situations (audio-only, audio+video, and audio+animation) are compared and analysed under five different conditions of noise contamination of the audio signal. The results show that the virtual face driven by context-dependent visemes effectively contributes to speech intelligibility at high noise degradation levels (Signal-to-Noise Ratio (SNR) ≤ -18 dB).
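The noise-contamination conditions above are defined by target SNR levels. As an illustration only (the paper does not specify its mixing procedure), a common way to produce a stimulus at a given SNR is to scale the noise so that the signal-to-noise power ratio matches the target; the helper names below are hypothetical:

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in dB: 10 * log10(P_signal / P_noise)."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

def mix_at_snr(signal, noise, target_snr_db):
    """Scale `noise` so that signal + scaled noise has the requested SNR (dB)."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    # Target noise power from SNR_dB = 10 * log10(P_signal / P_noise)
    target_p_noise = p_signal / (10.0 ** (target_snr_db / 10.0))
    scaled_noise = noise * np.sqrt(target_p_noise / p_noise)
    return signal + scaled_noise
```

For example, mixing a clean utterance with noise at -18 dB in this way yields audio where the noise power is roughly 63 times the speech power, matching the most degraded conditions reported in the abstract.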

Subject(s)

facial animation, evaluation, speech intelligibility test

Citation

WSCG '2007: Full Papers Proceedings: The 15th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2007, in co-operation with EUROGRAPHICS: University of West Bohemia, Plzen, Czech Republic, January 29 – February 1, 2007, p. 105-112.