ReStore: Reusing results of MapReduce jobs

Iman Elghandour, Ashraf Aboulnaga

Research output: Chapter in Book/Report/Conference proceeding › Chapter

78 Citations (Scopus)

Abstract

Analyzing large-scale data has emerged as an important activity for many organizations in the past few years. This large-scale data analysis is facilitated by the MapReduce programming and execution model and its implementations, most notably Hadoop. Users of MapReduce often have analysis tasks that are too complex to express as individual MapReduce jobs. Instead, they use high-level query languages such as Pig, Hive, or Jaql to express their complex tasks. The compilers of these languages translate queries into workflows of MapReduce jobs. Each job in these workflows reads its input from the distributed file system used by the MapReduce system and produces output that is stored in this distributed file system and read as input by the next job in the workflow. The current practice is to delete these intermediate results from the distributed file system at the end of executing the workflow. One way to improve the performance of workflows of MapReduce jobs is to keep these intermediate results and reuse them for future workflows submitted to the system. In this paper, we present ReStore, a system that manages the storage and reuse of such intermediate results. ReStore can reuse the output of whole MapReduce jobs that are part of a workflow, and it can also create additional reuse opportunities by materializing and storing the output of query execution operators that are executed within a MapReduce job. We have implemented ReStore as an extension to the Pig dataflow system on top of Hadoop, and we experimentally demonstrate significant speedups on queries from the PigMix benchmark.
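The core idea described in the abstract can be sketched in a few lines: keep a repository mapping each executed MapReduce job (identified by its inputs and logical operators) to the file-system path of its materialized output, and rewrite incoming workflows so that matching jobs read the cached output instead of re-executing. The names and structures below are purely illustrative, not the paper's actual API.

```python
# Minimal sketch of result reuse across MapReduce workflows,
# assuming a job is matched by its inputs plus its logical operators.
# All class and field names here are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Job:
    inputs: tuple      # paths read from the distributed file system
    operators: tuple   # logical operators this job executes


class ReuseRepository:
    def __init__(self):
        self._stored = {}  # Job -> path of its materialized output

    def store(self, job, output_path):
        # Keep the job's output instead of deleting it after the workflow.
        self._stored[job] = output_path

    def rewrite(self, workflow):
        """Replace each job whose result is already materialized with a
        direct read of the stored output; other jobs run as usual."""
        plan = []
        for job in workflow:
            if job in self._stored:
                plan.append(("read", self._stored[job]))
            else:
                plan.append(("run", job))
        return plan


repo = ReuseRepository()
j1 = Job(inputs=("hdfs:/logs",), operators=("FILTER", "GROUP"))
repo.store(j1, "hdfs:/restore/j1-out")

# A later workflow resubmits an identical first job; its second job
# consumes the cached output of the first.
j2 = Job(inputs=("hdfs:/restore/j1-out",), operators=("JOIN",))
plan = repo.rewrite([j1, j2])
```

The real system additionally matches sub-jobs (outputs of individual operators inside a MapReduce job) and must decide which outputs are worth the storage cost; this sketch only shows whole-job matching.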

Original language: English
Title of host publication: Proceedings of the VLDB Endowment
Pages: 586-597
Number of pages: 12
Volume: 5
Edition: 6
Publication status: Published - Feb 2012
Externally published: Yes


ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Computer Science(all)

Cite this

Elghandour, I., & Aboulnaga, A. (2012). ReStore: Reusing results of MapReduce jobs. In Proceedings of the VLDB Endowment (6 ed., Vol. 5, pp. 586-597)
