Kişiselleştirilmiş yüz jest örüntülerinin kestirimi

Translated title of the contribution: Estimation of personalized facial gesture patterns

Ferda Ofli, Engin Erzin, Yücel Yemez, A. Murat Tekalp

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose a framework for the estimation and analysis of temporal facial expression patterns of a speaker. The goal of this framework is to learn the personalized elementary dynamic facial expression patterns for a particular speaker. We track the lips, eyebrows, and eyelids of the speaker in 3D across a head-and-shoulder stereo video sequence. We use MPEG-4 Facial Definition Parameters (FDPs) to create the feature set, and MPEG-4 Facial Animation Parameters (FAPs) to represent the temporal facial expression patterns. Hidden Markov Model (HMM)-based unsupervised temporal segmentation of upper and lower facial expression features is performed separately to determine recurrent elementary facial expression patterns for the particular speaker. These facial expression patterns, which are coded by FAP sequences and need not be tied to prespecified emotions, can be used for personalized emotion estimation and synthesis for a speaker. Experimental results are presented.
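As a rough illustration of the HMM-based segmentation step described in the abstract, the sketch below fits a Gaussian-emission HMM to a synthetic stand-in for a FAP feature track and reads off contiguous runs of the decoded states as recurrent elementary patterns. The hmmlearn library, the synthetic data, and the three-state choice are illustrative assumptions, not details taken from the paper.

import numpy as np
from hmmlearn import hmm  # assumed library; not named in the paper

rng = np.random.default_rng(0)

# Stand-in for a FAP feature track: T frames x D expression parameters
# (e.g., lower-face lip features), with three synthetic "elementary patterns".
T, D = 600, 6
X = np.concatenate([
    rng.normal(loc=mu, scale=0.3, size=(T // 3, D))
    for mu in (0.0, 1.5, -1.0)
])

# Fit a Gaussian-emission HMM with no labels (unsupervised).
model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X)

# Viterbi decoding assigns each frame a hidden state; contiguous runs
# of the same state form the temporal segments.
states = model.predict(X)
boundaries = np.flatnonzero(np.diff(states)) + 1
print("segment boundaries (frames):", boundaries)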

Original language: Undefined/Unknown
Title of host publication: 2007 IEEE 15th Signal Processing and Communications Applications, SIU
DOIs: 10.1109/SIU.2007.4298615
Publication status: Published - 1 Dec 2007
Externally published: Yes
Event: 2007 IEEE 15th Signal Processing and Communications Applications, SIU - Eskisehir, Turkey
Duration: 11 Jun 2007 - 13 Jun 2007

Other

Country: Turkey
City: Eskisehir
Period: 11/6/07 - 13/6/07

Fingerprint

  • Facial expression
  • Animation
  • Hidden Markov models
  • Emotion
  • Video

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Communication
  • Signal Processing

Cite this

Ofli, F., Erzin, E., Yemez, Y., & Tekalp, A. M. (2007). Kişiselleştirilmiş yüz jest örüntülerinin kestirimi. In 2007 IEEE 15th Signal Processing and Communications Applications, SIU [4298615] https://doi.org/10.1109/SIU.2007.4298615

Kişiselleştirilmiş yüz jest örüntülerinin kestirimi. / Ofli, Ferda; Erzin, Engin; Yemez, Yücel; Tekalp, A. Murat.

2007 IEEE 15th Signal Processing and Communications Applications, SIU. 2007. 4298615.

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Ofli, F, Erzin, E, Yemez, Y & Tekalp, AM 2007, Kişiselleştirilmiş yüz jest örüntülerinin kestirimi. in 2007 IEEE 15th Signal Processing and Communications Applications, SIU., 4298615, Eskisehir, Turkey, 11/6/07. https://doi.org/10.1109/SIU.2007.4298615
Ofli F, Erzin E, Yemez Y, Tekalp AM. Kişiselleştirilmiş yüz jest örüntülerinin kestirimi. In 2007 IEEE 15th Signal Processing and Communications Applications, SIU. 2007. 4298615 https://doi.org/10.1109/SIU.2007.4298615
Ofli, Ferda ; Erzin, Engin ; Yemez, Yücel ; Tekalp, A. Murat. / Kişiselleştirilmiş yüz jest örüntülerinin kestirimi. 2007 IEEE 15th Signal Processing and Communications Applications, SIU. 2007.
@inproceedings{065a76ce8bc84e6f8327c116e950dbdf,
title = "Kişiselleştirilmiş y{\"u}z jest {\"o}r{\"u}nt{\"u}lerinin kestirimi",
abstract = "We propose a framework for the estimation and analysis of temporal facial expression patterns of a speaker. The goal of this framework is to learn the personalized elementary dynamic facial expression patterns for a particular speaker. We track the lips, eyebrows, and eyelids of the speaker in 3D across a head-and-shoulder stereo video sequence. We use MPEG-4 Facial Definition Parameters (FDPs) to create the feature set, and MPEG-4 Facial Animation Parameters (FAPs) to represent the temporal facial expression patterns. Hidden Markov Model (HMM)-based unsupervised temporal segmentation of upper and lower facial expression features is performed separately to determine recurrent elementary facial expression patterns for the particular speaker. These facial expression patterns, which are coded by FAP sequences and need not be tied to prespecified emotions, can be used for personalized emotion estimation and synthesis for a speaker. Experimental results are presented.",
author = "Ferda Ofli and Engin Erzin and Y{\"u}cel Yemez and Tekalp, {A. Murat}",
year = "2007",
month = "12",
day = "1",
doi = "10.1109/SIU.2007.4298615",
language = "Undefined/Unknown",
isbn = "1424407192",
booktitle = "2007 IEEE 15th Signal Processing and Communications Applications, SIU",

}

TY - GEN

T1 - Kişiselleştirilmiş yüz jest örüntülerinin kestirimi

AU - Ofli, Ferda

AU - Erzin, Engin

AU - Yemez, Yücel

AU - Tekalp, A. Murat

PY - 2007/12/1

Y1 - 2007/12/1

N2 - We propose a framework for the estimation and analysis of temporal facial expression patterns of a speaker. The goal of this framework is to learn the personalized elementary dynamic facial expression patterns for a particular speaker. We track the lips, eyebrows, and eyelids of the speaker in 3D across a head-and-shoulder stereo video sequence. We use MPEG-4 Facial Definition Parameters (FDPs) to create the feature set, and MPEG-4 Facial Animation Parameters (FAPs) to represent the temporal facial expression patterns. Hidden Markov Model (HMM)-based unsupervised temporal segmentation of upper and lower facial expression features is performed separately to determine recurrent elementary facial expression patterns for the particular speaker. These facial expression patterns, which are coded by FAP sequences and need not be tied to prespecified emotions, can be used for personalized emotion estimation and synthesis for a speaker. Experimental results are presented.

AB - We propose a framework for the estimation and analysis of temporal facial expression patterns of a speaker. The goal of this framework is to learn the personalized elementary dynamic facial expression patterns for a particular speaker. We track the lips, eyebrows, and eyelids of the speaker in 3D across a head-and-shoulder stereo video sequence. We use MPEG-4 Facial Definition Parameters (FDPs) to create the feature set, and MPEG-4 Facial Animation Parameters (FAPs) to represent the temporal facial expression patterns. Hidden Markov Model (HMM)-based unsupervised temporal segmentation of upper and lower facial expression features is performed separately to determine recurrent elementary facial expression patterns for the particular speaker. These facial expression patterns, which are coded by FAP sequences and need not be tied to prespecified emotions, can be used for personalized emotion estimation and synthesis for a speaker. Experimental results are presented.

UR - http://www.scopus.com/inward/record.url?scp=50249149810&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=50249149810&partnerID=8YFLogxK

U2 - 10.1109/SIU.2007.4298615

DO - 10.1109/SIU.2007.4298615

M3 - Conference contribution

SN - 1424407192

SN - 9781424407194

BT - 2007 IEEE 15th Signal Processing and Communications Applications, SIU

ER -