Learn2Dance

Learning statistical music-to-dance mappings for choreography synthesis

Ferda Ofli, Engin Erzin, Yücel Yemez, A. Murat Tekalp

Research output: Contribution to journal › Article

10 Citations (Scopus)

Abstract

We propose a novel framework for learning many-to-many statistical mappings from musical measures to dance figures towards generating plausible music-driven dance choreographies. We obtain music-to-dance mappings through the use of four statistical models: 1) musical measure models, representing a many-to-one relation, each of which associates different melody patterns to a given dance figure via a hidden Markov model (HMM); 2) exchangeable figures model, which captures the diversity in a dance performance through a one-to-many relation, extracted by unsupervised clustering of musical measure segments based on melodic similarity; 3) figure transition model, which captures the intrinsic dependencies of dance figure sequences via an n-gram model; 4) dance figure models, capturing the variations in the way particular dance figures are performed, by modeling the motion trajectory of each dance figure via an HMM. Based on the first three of these statistical mappings, we define a discrete HMM and synthesize alternative dance figure sequences by employing a modified Viterbi algorithm. The motion parameters of the dance figures in the synthesized choreography are then computed using the dance figure models. Finally, the generated motion parameters are animated synchronously with the musical audio using a 3-D character model. Objective and subjective evaluation results demonstrate that the proposed framework is able to produce compelling music-driven choreographies.
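At synthesis time, the pipeline described in the abstract amounts to decoding over a discrete HMM whose hidden states are dance figures, whose per-measure observation scores come from the musical measure models, and whose transitions come from the n-gram figure transition model. The Python sketch below is a rough illustration of that idea, not the authors' implementation: it estimates a Laplace-smoothed bigram transition model from hypothetical training figure sequences and runs a standard Viterbi decode over stand-in per-measure log-likelihoods. All names, sizes, and inputs are hypothetical, and the paper's modified Viterbi, which additionally uses the exchangeable figures model to propose alternative sequences, is not reproduced here.

```python
import numpy as np

def bigram_log_trans(sequences, n_figures, alpha=1.0):
    """Laplace-smoothed bigram (n-gram with n=2) figure transition model,
    estimated from training sequences of dance-figure labels."""
    counts = np.full((n_figures, n_figures), alpha)
    for seq in sequences:
        for prev, nxt in zip(seq[:-1], seq[1:]):
            counts[prev, nxt] += 1
    # Row-normalize so each row is P(next figure | previous figure).
    return np.log(counts / counts.sum(axis=1, keepdims=True))

def viterbi_choreography(measure_loglik, log_trans, log_init):
    """Decode the max-probability dance-figure sequence.

    measure_loglik: (T, F) array of log P(measure_t | figure_f); in the
                    paper these scores would come from the per-figure
                    musical measure HMMs (here they are stand-ins).
    log_trans:      (F, F) bigram log P(next figure | previous figure).
    log_init:       (F,) log-probabilities of the first figure.
    """
    T, F = measure_loglik.shape
    delta = log_init + measure_loglik[0]      # best log-score per figure so far
    psi = np.zeros((T, F), dtype=int)         # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # scores[i, j]: prev i -> next j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + measure_loglik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # trace backpointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy usage with hypothetical sizes: 8 musical measures, 4 dance figures.
rng = np.random.default_rng(0)
T, F = 8, 4
measure_loglik = np.log(rng.dirichlet(np.ones(F), size=T))  # fake HMM scores
log_trans = bigram_log_trans([[0, 1, 2, 1, 3, 0, 1, 2]], F)
log_init = np.log(np.full(F, 1.0 / F))
print(viterbi_choreography(measure_loglik, log_trans, log_init))
```

One natural reading of the exchangeable figures model is that, after decoding, a figure in the path may be swapped with another figure from the same melodic cluster to produce alternative choreographies; the paper's exact mechanism may differ.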

Original language: English
Article number: 6112231
Pages (from-to): 747-759
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Volume: 14
Issue number: 3 PART 2
DOIs: 10.1109/TMM.2011.2181492
Publication status: Published - 22 May 2012
Externally published: Yes

Fingerprint

  • Hidden Markov models
  • Viterbi algorithm
  • Trajectories

Keywords

  • Automatic dance choreography creation
  • multimodal dance modeling
  • music-driven dance performance synthesis and animation
  • music-to-dance mapping
  • musical measure clustering

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Signal Processing
  • Media Technology
  • Computer Science Applications

Cite this

Learn2Dance: Learning statistical music-to-dance mappings for choreography synthesis. / Ofli, Ferda; Erzin, Engin; Yemez, Yücel; Tekalp, A. Murat.

In: IEEE Transactions on Multimedia, Vol. 14, No. 3 PART 2, 6112231, 22.05.2012, pp. 747-759.

Research output: Contribution to journal › Article

Ofli, Ferda; Erzin, Engin; Yemez, Yücel; Tekalp, A. Murat. / Learn2Dance: Learning statistical music-to-dance mappings for choreography synthesis. In: IEEE Transactions on Multimedia. 2012; Vol. 14, No. 3 PART 2. pp. 747-759.
@article{83c38a578e4342b2881c62d6ffd2c568,
title = "Learn2Dance: Learning statistical music-to-dance mappings for choreography synthesis",
abstract = "We propose a novel framework for learning many-to-many statistical mappings from musical measures to dance figures towards generating plausible music-driven dance choreographies. We obtain music-to-dance mappings through use of four statistical models: 1) musical measure models, representing a many-to-one relation, each of which associates different melody patterns to a given dance figure via a hidden Markov model (HMM); 2) exchangeable figures model, which captures the diversity in a dance performance through a one-to-many relation, extracted by unsupervised clustering of musical measure segments based on melodic similarity; 3) figure transition model, which captures the intrinsic dependencies of dance figure sequences via an $n$-gram model; 4) dance figure models, capturing the variations in the way particular dance figures are performed, by modeling the motion trajectory of each dance figure via an HMM. Based on the first three of these statistical mappings, we define a discrete HMM and synthesize alternative dance figure sequences by employing a modified Viterbi algorithm. The motion parameters of the dance figures in the synthesized choreography are then computed using the dance figure models. Finally, the generated motion parameters are animated synchronously with the musical audio using a 3-D character model. Objective and subjective evaluation results demonstrate that the proposed framework is able to produce compelling music-driven choreographies.",
keywords = "Automatic dance choreography creation, multimodal dance modeling, music-driven dance performance synthesis and animation, music-to-dance mapping, musical measure clustering",
author = "Ofli, Ferda and Erzin, Engin and Yemez, Y{\"u}cel and Tekalp, {A. Murat}",
year = "2012",
month = "5",
day = "22",
doi = "10.1109/TMM.2011.2181492",
language = "English",
volume = "14",
pages = "747--759",
journal = "IEEE Transactions on Multimedia",
issn = "1520-9210",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "3 PART 2",
}

TY - JOUR
T1 - Learn2Dance
T2 - Learning statistical music-to-dance mappings for choreography synthesis
AU - Ofli, Ferda
AU - Erzin, Engin
AU - Yemez, Yücel
AU - Tekalp, A. Murat
PY - 2012/5/22
Y1 - 2012/5/22
AB - We propose a novel framework for learning many-to-many statistical mappings from musical measures to dance figures towards generating plausible music-driven dance choreographies. We obtain music-to-dance mappings through use of four statistical models: 1) musical measure models, representing a many-to-one relation, each of which associates different melody patterns to a given dance figure via a hidden Markov model (HMM); 2) exchangeable figures model, which captures the diversity in a dance performance through a one-to-many relation, extracted by unsupervised clustering of musical measure segments based on melodic similarity; 3) figure transition model, which captures the intrinsic dependencies of dance figure sequences via an $n$-gram model; 4) dance figure models, capturing the variations in the way particular dance figures are performed, by modeling the motion trajectory of each dance figure via an HMM. Based on the first three of these statistical mappings, we define a discrete HMM and synthesize alternative dance figure sequences by employing a modified Viterbi algorithm. The motion parameters of the dance figures in the synthesized choreography are then computed using the dance figure models. Finally, the generated motion parameters are animated synchronously with the musical audio using a 3-D character model. Objective and subjective evaluation results demonstrate that the proposed framework is able to produce compelling music-driven choreographies.
KW - Automatic dance choreography creation
KW - multimodal dance modeling
KW - music-driven dance performance synthesis and animation
KW - music-to-dance mapping
KW - musical measure clustering
UR - http://www.scopus.com/inward/record.url?scp=84861131711&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84861131711&partnerID=8YFLogxK
U2 - 10.1109/TMM.2011.2181492
DO - 10.1109/TMM.2011.2181492
M3 - Article
VL - 14
SP - 747
EP - 759
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
SN - 1520-9210
IS - 3 PART 2
M1 - 6112231
ER -