LERC: Coordinated cache management for data-parallel systems

Yinghao Yu, Wei Wang, Jun Zhang, Khaled Letaief

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Memory caches are aggressively used in today's data-parallel frameworks such as Spark, Tez, and Storm. By caching input and intermediate data in memory, compute tasks can be sped up by orders of magnitude. To maximize the chance of in-memory data access, existing cache algorithms, be they recency- or frequency-based, settle on cache hit ratio as the optimization objective. However, contrary to conventional belief, we show in this paper that simply pursuing a higher cache hit ratio for individual data blocks does not necessarily translate into faster task completion in data-parallel environments. A data-parallel task typically depends on multiple input data blocks; unless all of these blocks are cached in memory, no speedup results. To capture this all-or-nothing property, we propose a more relevant metric, called the effective cache hit ratio: a cache hit on a data block is effective only if it speeds up a compute task. To optimize the effective cache hit ratio, we propose the Least Effective Reference Count (LERC) policy, which persists the dependent blocks of a compute task as a whole in memory. We have implemented LERC as a memory manager in Spark and evaluated its performance through deployment on Amazon EC2. Evaluation results demonstrate that LERC speeds up data-parallel jobs by up to 37% compared with the widely employed least-recently-used (LRU) policy.
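To make the all-or-nothing property concrete, below is a minimal Python sketch of the scoring idea described in the abstract. It is illustrative only, not the authors' Spark memory manager: the function names (effective_ref_count, pick_victim) and the toy blocks and tasks are hypothetical. A pending task's reference to a block counts as effective only when every block the task depends on is cached, and the block with the least effective reference count is evicted first.

# Minimal sketch of LERC-style eviction scoring (illustrative only;
# names and workload are hypothetical, not from the paper's code).

def effective_ref_count(block, cached, pending_tasks):
    # A task's reference to `block` is effective only if ALL of the
    # task's dependent blocks are cached: a partial hit yields no speedup.
    return sum(1 for deps in pending_tasks
               if block in deps and deps <= cached)

def pick_victim(cached, pending_tasks):
    # Evict the cached block with the least effective reference count.
    return min(cached,
               key=lambda b: effective_ref_count(b, cached, pending_tasks))

# Toy workload: task t1 reads {a, b}; t2 reads {b, c}; t3 reads {c}.
pending_tasks = [{"a", "b"}, {"b", "c"}, {"c"}]
cached = {"a", "b", "c"}

print(pick_victim(cached, pending_tasks))  # -> a (1 effective ref vs. 2)

# Once "a" is gone, b's reference from t1 is no longer effective, so b
# (not c) is the next victim: blocks a task depends on tend to be kept
# or evicted together, which is the coordination LERC is after.
cached.discard("a")
print(pick_victim(cached, pending_tasks))  # -> b (1 effective ref vs. 2)

Under this sketch's assumptions, the effective cache hit ratio is the fraction of block reads that hit blocks whose entire task-level block set is cached, which is exactly what a raw hit ratio can overstate.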

Original language: English
Title of host publication: 2017 IEEE Global Communications Conference, GLOBECOM 2017 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-6
Number of pages: 6
Volume: 2018-January
ISBN (Electronic): 9781509050192
DOIs: 10.1109/GLOCOM.2017.8254999
Publication status: Published - 10 Jan 2018
Externally published: Yes
Event: 2017 IEEE Global Communications Conference, GLOBECOM 2017 - Singapore, Singapore
Duration: 4 Dec 2017 - 8 Dec 2017

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Hardware and Architecture
  • Safety, Risk, Reliability and Quality

Cite this

Yu, Y., Wang, W., Zhang, J., & Letaief, K. (2018). LERC: Coordinated cache management for data-parallel systems. In 2017 IEEE Global Communications Conference, GLOBECOM 2017 - Proceedings (Vol. 2018-January, pp. 1-6). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/GLOCOM.2017.8254999
