Estimation and analysis of facial animation parameter patterns

Ferda Ofli, Engin Erzin, Yucel Yemez, A. Murat Tekalp

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

We propose a framework for estimation and analysis of temporal facial expression patterns of a speaker. The proposed system aims to learn personalized elementary dynamic facial expression patterns for a particular speaker. We use head-and-shoulder stereo video sequences to track lip, eye, eyebrow, and eyelid motion of a speaker in 3D. MPEG-4 Facial Definition Parameters (FDPs) are used as the feature set, and temporal facial expression patterns are represented by MPEG-4 Facial Animation Parameters (FAPs). We perform Hidden Markov Model (HMM) based unsupervised temporal segmentation of upper and lower facial expression features separately to determine recurrent elementary facial expression patterns for a particular speaker. These facial expression patterns, coded as FAP sequences, need not be tied to prespecified emotions and can be used for personalized emotion estimation and synthesis for a speaker. Experimental results are presented.
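To make the segmentation step concrete, below is a minimal sketch (not the authors' implementation) of HMM-based unsupervised temporal segmentation as the abstract describes it: an ergodic Gaussian HMM is fit to an unlabeled stream of FAP-like features, and Viterbi decoding partitions the stream into segments, with recurring states playing the role of elementary expression patterns. The hmmlearn library, the state count K, and the synthetic feature matrix are all assumptions standing in for the paper's tracked FAP trajectories.

import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
T, D, K = 600, 8, 5          # frames, FAP feature dimension, assumed number of patterns
X = rng.normal(size=(T, D))  # hypothetical stand-in for tracked FAP trajectories

# Fit an ergodic Gaussian HMM to the unlabeled feature stream (unsupervised).
hmm = GaussianHMM(n_components=K, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(X)

# Viterbi decoding assigns one state per frame; runs of the same state form
# temporal segments, and each recurring state corresponds to a candidate
# elementary facial expression pattern.
states = hmm.predict(X)
boundaries = np.flatnonzero(np.diff(states)) + 1
for seg in np.split(states, boundaries)[:5]:
    print(f"pattern {seg[0]}: {len(seg)} frames")

Since the paper segments upper-face and lower-face feature streams separately, this sketch would be applied once per stream, yielding one pattern inventory for each facial region.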

Original language: English
Title of host publication: Proceedings - International Conference on Image Processing, ICIP
Volume: 4
DOI: 10.1109/ICIP.2007.4380012
Publication status: Published - 2007
Externally published: Yes
Event: 14th IEEE International Conference on Image Processing, ICIP 2007 - San Antonio, TX, United States
Duration: 16 Sep 2007 - 19 Sep 2007

Keywords

  • Dynamic facial expression analysis
  • Temporal patterns

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Ofli, F., Erzin, E., Yemez, Y., & Tekalp, A. M. (2007). Estimation and analysis of facial animation parameter patterns. In Proceedings - International Conference on Image Processing, ICIP (Vol. 4, Article 4380012). https://doi.org/10.1109/ICIP.2007.4380012

@inproceedings{d8851283816f44f4a19a6523a2dfac12,
title = "Estimation and analysis of facial animation parameter patterns",
abstract = "We propose a framework for estimation and analysis of temporal facial expression patterns of a speaker. The proposed system aims to learn personalized elementary dynamic facial expression patterns for a particular speaker. We use head-and-shoulder stereo video sequences to track lip, eye, eyebrow, and eyelid motion of a speaker in 3D. MPEG-4 Facial Definition Parameters (FDPs) are used as the feature set, and temporal facial expression patterns are represented by MPEG-4 Facial Animation Parameters (FAPs). We perform Hidden Markov Model (HMM) based unsupervised temporal segmentation of upper and lower facial expression features separately to determine recurrent elementary facial expression patterns for a particular speaker. These facial expression patterns, coded as FAP sequences, need not be tied to prespecified emotions and can be used for personalized emotion estimation and synthesis for a speaker. Experimental results are presented.",
keywords = "Dynamic facial expression analysis, Temporal patterns",
author = "Ferda Ofli and Engin Erzin and Yucel Yemez and Tekalp, {A. Murat}",
year = "2007",
doi = "10.1109/ICIP.2007.4380012",
language = "English",
isbn = "1424414377",
volume = "4",
booktitle = "Proceedings - International Conference on Image Processing, ICIP",
}
