Multi-modal analysis of dance performances for music-driven choreography synthesis

Ferda Ofli, E. Erzin, Y. Yemez, A. M. Tekalp

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

4 Citations (Scopus)

Abstract

We propose a framework for modeling, analysis, annotation and synthesis of multi-modal dance performances. We analyze correlations between music features and dance figure labels on training dance videos in order to construct a mapping from music measures (segments) to dance figures towards generating music-driven dance choreographies. We assume that dance figure segment boundaries coincide with music measure (audio) boundaries. For each training video, figure segments are manually labeled by an expert to indicate the type of dance motion. Chroma features of each measure are used for music analysis. We model the temporal statistics of the chroma features corresponding to each dance figure label to identify different rhythmic patterns for that dance motion. The correlations between dance figures and music measures, as well as the correlations between consecutive dance figures, are used to construct a mapping for music-driven dance choreography synthesis. Experimental results demonstrate the success of the proposed music-driven choreography synthesis framework.
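
As a rough illustration of the kind of pipeline the abstract describes (per-measure chroma statistics per dance figure, plus figure-to-figure transition statistics, combined to choose a figure sequence for new music), the following is a minimal sketch and not the authors' implementation. The Gaussian emission model, the Viterbi-style decoding, the synthetic training data, and all variable names are illustrative assumptions; in practice the per-measure chroma vectors would come from audio analysis (e.g. librosa.feature.chroma_stft) and the figure labels from the expert annotations mentioned above.

# Illustrative sketch only: measure-to-figure mapping in the spirit of the abstract.
# Assumptions: each measure is summarized by a 12-dimensional chroma vector, each
# training measure carries a dance figure label, and synthesis combines per-figure
# chroma likelihoods with figure-to-figure transition statistics.
import numpy as np

rng = np.random.default_rng(0)
n_figures = 4          # hypothetical number of dance figure labels
n_train = 200          # number of labeled training measures
chroma_dim = 12

# Synthetic stand-in for labeled training data (one chroma vector per measure).
train_labels = rng.integers(0, n_figures, size=n_train)
figure_means_true = rng.random((n_figures, chroma_dim))
train_chroma = figure_means_true[train_labels] + 0.1 * rng.standard_normal((n_train, chroma_dim))

# Model chroma statistics per figure label: per-figure mean and a shared variance.
means = np.vstack([train_chroma[train_labels == k].mean(axis=0) for k in range(n_figures)])
var = train_chroma.var(axis=0).mean() + 1e-6

# Figure-to-figure transition statistics from consecutive training labels.
trans = np.full((n_figures, n_figures), 1e-3)   # small smoothing count
for a, b in zip(train_labels[:-1], train_labels[1:]):
    trans[a, b] += 1.0
trans /= trans.sum(axis=1, keepdims=True)
log_trans = np.log(trans)

def log_likelihood(chroma_vec):
    # Log-likelihood of one measure's chroma under each figure (isotropic Gaussian).
    d2 = ((chroma_vec - means) ** 2).sum(axis=1)
    return -0.5 * d2 / var

def synthesize_choreography(measure_chroma):
    # Viterbi-style decoding: best-scoring figure sequence for a sequence of measures.
    T = len(measure_chroma)
    score = np.zeros((T, n_figures))
    back = np.zeros((T, n_figures), dtype=int)
    score[0] = log_likelihood(measure_chroma[0])
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans    # rows: previous figure, cols: next figure
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_likelihood(measure_chroma[t])
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Usage: map a new piece's per-measure chroma to a dance figure sequence.
test_labels = rng.integers(0, n_figures, size=16)
test_chroma = figure_means_true[test_labels] + 0.1 * rng.standard_normal((16, chroma_dim))
print("synthesized figures:", synthesize_choreography(test_chroma))

A first-order transition model over figures is one simple way to encode the "correlations between consecutive dance figures" the abstract mentions; the paper itself may model these statistics differently.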

Original language: English
Title of host publication: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Pages: 2466-2469
Number of pages: 4
DOI: 10.1109/ICASSP.2010.5494891
Publication status: Published - 8 Nov 2010
Externally published: Yes
Event: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010 - Dallas, TX, United States
Duration: 14 Mar 2010 - 19 Mar 2010

Other

Other: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010
Country: United States
City: Dallas, TX
Period: 14/3/10 - 19/3/10

Keywords

  • Multimodal dance modeling
  • Music-driven dance choreography synthesis

ASJC Scopus subject areas

  • Signal Processing
  • Software
  • Electrical and Electronic Engineering

Cite this

Ofli, F., Erzin, E., Yemez, Y., & Tekalp, A. M. (2010). Multi-modal analysis of dance performances for music-driven choreography synthesis. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (pp. 2466-2469). [5494891] https://doi.org/10.1109/ICASSP.2010.5494891

Multi-modal analysis of dance performances for music-driven choreography synthesis. / Ofli, Ferda; Erzin, E.; Yemez, Y.; Tekalp, A. M.

ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. 2010. p. 2466-2469 5494891.

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

Ofli, F, Erzin, E, Yemez, Y & Tekalp, AM 2010, Multi-modal analysis of dance performances for music-driven choreography synthesis. in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings., 5494891, pp. 2466-2469, 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010, Dallas, TX, United States, 14/3/10. https://doi.org/10.1109/ICASSP.2010.5494891
Ofli F, Erzin E, Yemez Y, Tekalp AM. Multi-modal analysis of dance performances for music-driven choreography synthesis. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. 2010. p. 2466-2469. 5494891 https://doi.org/10.1109/ICASSP.2010.5494891
Ofli, Ferda ; Erzin, E. ; Yemez, Y. ; Tekalp, A. M. / Multi-modal analysis of dance performances for music-driven choreography synthesis. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. 2010. pp. 2466-2469
@inproceedings{f6deb6fab19e4e41a7b47ac607462fd2,
title = "Multi-modal analysis of dance performances for music-driven choreography synthesis",
abstract = "We propose a framework for modeling, analysis, annotation and synthesis of multi-modal dance performances. We analyze correlations between music features and dance figure labels on training dance videos in order to construct a mapping from music measures (segments) to dance figures towards generating music-driven dance choreographies. We assume that dance figure segment boundaries coincide with music measures (audio boundaries). For each training video, figure segments are manually labeled by an expert to indicate the type of dance motion. Chroma features of each measure are used for music analysis. We model temporal statistics of such chroma features corresponding to each dance figure label to identify different rhythmic patterns for that dance motion. The correlations between dance figures and music measures, as well as, correlations between consecutive dance figures are used to construct a mapping for musicdriven dance choreography synthesis. Experimental results demonstrate the success of proposed music-driven choreography synthesis framework.",
keywords = "Multimodal dance modeling, Music-driven dance choreography synthesis",
author = "Ferda Ofli and E. Erzin and Y. Yemez and Tekalp, {A. M.}",
year = "2010",
month = "11",
day = "8",
doi = "10.1109/ICASSP.2010.5494891",
language = "English",
isbn = "9781424442966",
pages = "2466--2469",
booktitle = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",

}

TY - GEN

T1 - Multi-modal analysis of dance performances for music-driven choreography synthesis

AU - Ofli, Ferda

AU - Erzin, E.

AU - Yemez, Y.

AU - Tekalp, A. M.

PY - 2010/11/8

Y1 - 2010/11/8

N2 - We propose a framework for modeling, analysis, annotation and synthesis of multi-modal dance performances. We analyze correlations between music features and dance figure labels on training dance videos in order to construct a mapping from music measures (segments) to dance figures towards generating music-driven dance choreographies. We assume that dance figure segment boundaries coincide with music measure (audio) boundaries. For each training video, figure segments are manually labeled by an expert to indicate the type of dance motion. Chroma features of each measure are used for music analysis. We model the temporal statistics of the chroma features corresponding to each dance figure label to identify different rhythmic patterns for that dance motion. The correlations between dance figures and music measures, as well as the correlations between consecutive dance figures, are used to construct a mapping for music-driven dance choreography synthesis. Experimental results demonstrate the success of the proposed music-driven choreography synthesis framework.

AB - We propose a framework for modeling, analysis, annotation and synthesis of multi-modal dance performances. We analyze correlations between music features and dance figure labels on training dance videos in order to construct a mapping from music measures (segments) to dance figures towards generating music-driven dance choreographies. We assume that dance figure segment boundaries coincide with music measure (audio) boundaries. For each training video, figure segments are manually labeled by an expert to indicate the type of dance motion. Chroma features of each measure are used for music analysis. We model the temporal statistics of the chroma features corresponding to each dance figure label to identify different rhythmic patterns for that dance motion. The correlations between dance figures and music measures, as well as the correlations between consecutive dance figures, are used to construct a mapping for music-driven dance choreography synthesis. Experimental results demonstrate the success of the proposed music-driven choreography synthesis framework.

KW - Multimodal dance modeling

KW - Music-driven dance choreography synthesis

UR - http://www.scopus.com/inward/record.url?scp=78049410117&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=78049410117&partnerID=8YFLogxK

U2 - 10.1109/ICASSP.2010.5494891

DO - 10.1109/ICASSP.2010.5494891

M3 - Conference contribution

AN - SCOPUS:78049410117

SN - 9781424442966

SP - 2466

EP - 2469

BT - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

ER -