Audio-driven human body motion analysis and synthesis

Ferda Ofli, C. Canton-Ferrer, J. Tilmanne, Y. Demir, E. Bozkurt, Y. Yemez, E. Erzin, A. M. Tekalp

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

This paper presents a framework for audio-driven human body motion analysis and synthesis. We address the problem in the context of a dance performance, where gestures and movements of the dancer are mainly driven by a musical piece and characterized by the repetition of a set of dance figures. The system is trained in a supervised manner using the multiview video recordings of the dancer. The human body posture is extracted from multiview video information without any human intervention using a novel marker-based algorithm based on annealing particle filtering. Audio is analyzed to extract beat and tempo information. The joint analysis of audio and motion features provides a correlation model that is then used to animate a dancing avatar when driven with any musical piece of the same genre. Results are provided showing the effectiveness of the proposed algorithm.
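The audio-analysis stage described above (beat and tempo extraction) can be loosely sketched as an autocorrelation-based tempo estimator over an onset-energy envelope. This is an illustrative sketch only, not the authors' implementation; the function name, the frame-based envelope representation, and the synthetic test signal are all assumptions introduced here.

```python
# Illustrative sketch of a tempo estimator: pick the beat period (in frames)
# whose autocorrelation of the onset-energy envelope is largest, then convert
# that period back to beats per minute. Hypothetical code, not the method
# used in the paper.

def estimate_tempo(envelope, frame_rate, min_bpm=90, max_bpm=180):
    """Estimate tempo (BPM) from an onset-energy envelope sampled at
    `frame_rate` frames per second. The BPM search range is kept narrow
    to sidestep tempo-octave ambiguity (60 vs 120 vs 240 BPM)."""
    n = len(envelope)
    mean = sum(envelope) / n
    x = [e - mean for e in envelope]  # zero-mean envelope

    # Candidate beat periods in frames, derived from the BPM range.
    lag_min = int(frame_rate * 60.0 / max_bpm) + 1  # shortest period
    lag_max = int(frame_rate * 60.0 / min_bpm)      # longest period

    best_lag, best_score = None, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        # Normalized autocorrelation at this lag: high when the envelope
        # repeats itself every `lag` frames.
        score = sum(x[i] * x[i - lag] for i in range(lag, n)) / (n - lag)
        if score > best_score:
            best_lag, best_score = lag, score
    return round(frame_rate * 60.0 / best_lag)

# Synthetic envelope: one onset every 0.5 s at 100 frames/s, i.e. 120 BPM.
frame_rate = 100
envelope = [1.0 if i % 50 == 0 else 0.0 for i in range(1000)]
print(estimate_tempo(envelope, frame_rate))  # prints 120
```

In the paper's pipeline, tempo and beat features of this kind would then be correlated with the motion features of recognized dance figures, so that a new musical piece of the same genre can drive the avatar's synthesized motion.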

Original language: English
Title of host publication: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Pages: 2233-2236
Number of pages: 4
ISBNs: 1424414849, 9781424414840
DOIs: https://doi.org/10.1109/ICASSP.2008.4518089
Publication status: Published - 16 Sep 2008
Externally published: Yes
Event: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP - Las Vegas, NV, United States
Duration: 31 Mar 2008 - 4 Apr 2008

Other

Other: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP
Country: United States
City: Las Vegas, NV
Period: 31/3/08 - 4/4/08

Keywords

  • Audio-driven body motion synthesis
  • Dancing avatar animation
  • Multicamera motion capture

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Signal Processing
  • Acoustics and Ultrasonics

Cite this

Ofli, F., Canton-Ferrer, C., Tilmanne, J., Demir, Y., Bozkurt, E., Yemez, Y., ... Tekalp, A. M. (2008). Audio-driven human body motion analysis and synthesis. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (pp. 2233-2236). [4518089] https://doi.org/10.1109/ICASSP.2008.4518089

@inproceedings{afc045b36a5e4be49d19c86b31e22953,
title = "Audio-driven human body motion analysis and synthesis",
abstract = "This paper presents a framework for audio-driven human body motion analysis and synthesis. We address the problem in the context of a dance performance, where gestures and movements of the dancer are mainly driven by a musical piece and characterized by the repetition of a set of dance figures. The system is trained in a supervised manner using the multiview video recordings of the dancer. The human body posture is extracted from multiview video information without any human intervention using a novel marker-based algorithm based on annealing particle filtering. Audio is analyzed to extract beat and tempo information. The joint analysis of audio and motion features provides a correlation model that is then used to animate a dancing avatar when driven with any musical piece of the same genre. Results are provided showing the effectiveness of the proposed algorithm.",
keywords = "Audio-driven body motion synthesis, Dancing avatar animation, Multicamera motion capture",
author = "Ferda Ofli and C. Canton-Ferrer and J. Tilmanne and Y. Demir and E. Bozkurt and Y. Yemez and E. Erzin and Tekalp, {A. M.}",
year = "2008",
month = "9",
day = "16",
doi = "10.1109/ICASSP.2008.4518089",
language = "English",
isbn = "1424414849",
pages = "2233--2236",
booktitle = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",

}
