Learning-Based Video Motion Magnification

Tae Hyun Oh, Ronnachai Jaroensri, Changil Kim, Mohamed Elgharib, Frédo Durand, William T. Freeman, Wojciech Matusik

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Video motion magnification techniques allow us to see small motions that were previously invisible to the naked eye, such as those of vibrating airplane wings or buildings swaying in the wind. Because the motion is small, the magnification results are prone to noise or excessive blurring. The state of the art relies on hand-designed filters to extract representations that may not be optimal. In this paper, we seek to learn the filters directly from examples using deep convolutional neural networks. To make training tractable, we carefully design a synthetic dataset that captures small motion well and use two-frame input for training. We show that the learned filters achieve high-quality results on real videos, with fewer ringing artifacts and better noise characteristics than previous methods. Although our model is not trained with temporal filters, we find that temporal filters can be applied to our extracted representations up to a moderate magnification, enabling frequency-based motion selection. Finally, we analyze the learned filters and show that they behave similarly to the derivative filters used in previous work. Our code, trained model, and datasets will be available online.
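
The abstract describes a learned, two-frame magnification pipeline: filters extracted by a deep convolutional network replace hand-designed ones, and the change between two input frames is amplified before an output frame is rendered. The sketch below illustrates that idea only; the layer sizes, the single-branch encoder, and the class name TwoFrameMagnifier are illustrative assumptions and not the authors' published architecture, which (per the abstract) is trained on a carefully designed synthetic dataset.

# A minimal PyTorch sketch of the two-frame magnification idea from the abstract.
# Everything below (layer sizes, channel counts, class and variable names) is an
# illustrative assumption, not the published model, and the network is untrained.
import torch
import torch.nn as nn


class TwoFrameMagnifier(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        # Encoder: learned filters standing in for hand-designed ones.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: renders an output frame from the (magnified) representation.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, kernel_size=7, padding=3),
        )

    def forward(self, frame_a: torch.Tensor, frame_b: torch.Tensor, alpha: float) -> torch.Tensor:
        rep_a = self.encoder(frame_a)  # representation of the reference frame
        rep_b = self.encoder(frame_b)  # representation of the current frame
        # Amplify the change in representation between the two frames by alpha.
        rep_mag = rep_a + alpha * (rep_b - rep_a)
        return self.decoder(rep_mag)


# Usage: magnify the (placeholder, random) motion between two frames by 10x.
frame_a = torch.rand(1, 3, 128, 128)
frame_b = torch.rand(1, 3, 128, 128)
magnified = TwoFrameMagnifier()(frame_a, frame_b, alpha=10.0)

As the abstract notes, a temporal band-pass filter can additionally be applied to the per-frame representations before amplification to select motions by frequency, up to a moderate magnification factor.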

Original language: English
Title of host publication: Computer Vision – ECCV 2018 - 15th European Conference, 2018, Proceedings
Editors: Vittorio Ferrari, Cristian Sminchisescu, Yair Weiss, Martial Hebert
Publisher: Springer Verlag
Pages: 663-679
Number of pages: 17
ISBN (Print): 9783030012243
DOI: 10.1007/978-3-030-01225-0_39
Publication status: Published - 1 Jan 2018
Event: 15th European Conference on Computer Vision, ECCV 2018 - Munich, Germany
Duration: 8 Sep 2018 - 14 Sep 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11208 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 15th European Conference on Computer Vision, ECCV 2018
Country: Germany
City: Munich
Period: 8/9/18 - 14/9/18

Keywords

  • Deep convolutional neural network
  • Motion magnification
  • Motion manipulation

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Oh, T. H., Jaroensri, R., Kim, C., Elgharib, M., Durand, F., Freeman, W. T., & Matusik, W. (2018). Learning-Based Video Motion Magnification. In V. Ferrari, C. Sminchisescu, Y. Weiss, & M. Hebert (Eds.), Computer Vision – ECCV 2018 - 15th European Conference, 2018, Proceedings (pp. 663-679). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11208 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-030-01225-0_39
