Abstract
Memory caches are aggressively used in today's data-parallel frameworks such as Spark, Tez, and Storm. By caching input and intermediate data in memory, compute tasks can be sped up by orders of magnitude. To maximize the chance of in-memory data access, existing cache algorithms, whether recency- or frequency-based, settle on cache hit ratio as the optimization objective. However, contrary to conventional belief, we show in this paper that simply pursuing a higher cache hit ratio for individual data blocks does not necessarily translate into faster task completion in data-parallel environments. A data-parallel task typically depends on multiple input data blocks: unless all of these blocks are cached in memory, no speedup results. To capture this all-or-nothing property, we propose a more relevant metric, called the effective cache hit ratio. Specifically, a cache hit on a data block is said to be effective if it speeds up a compute task. To optimize the effective cache hit ratio, we propose the Least Effective Reference Count (LERC) policy, which persists the dependent blocks of a compute task in memory as a whole. We have implemented LERC as a memory manager in Spark and evaluated its performance through deployment on Amazon EC2. Evaluation results demonstrate that LERC speeds up data-parallel jobs by up to 37% compared with the widely employed least-recently-used (LRU) policy.
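To make the abstract's all-or-nothing argument concrete, here is a minimal Python sketch under a simplified model in which each task depends on a flat set of block ids. The names (`LERCCache`, `effective_hit_ratio`) and the model itself are illustrative assumptions, not the paper's Spark memory manager: `effective_hit_ratio` credits a hit only when a task's entire dependency set is cached, and `LERCCache` evicts the cached block with the smallest effective reference count.

```python
# Illustrative sketch only: toy names and a flat-set task model are
# assumptions, not the paper's actual Spark implementation.

class LERCCache:
    """Block cache that evicts the block with the least *effective*
    reference count, i.e. the fewest pending tasks whose other dependent
    blocks are all cached (the all-or-nothing property)."""

    def __init__(self, capacity):
        self.capacity = capacity   # max number of blocks held in memory
        self.blocks = set()        # ids of blocks currently cached
        self.tasks = []            # pending tasks, each a set of block ids

    def effective_ref_count(self, block):
        # A task's reference to `block` is effective only if all of the
        # task's other dependent blocks are already cached.
        return sum(1 for deps in self.tasks
                   if block in deps and (deps - {block}) <= self.blocks)

    def put(self, block):
        if block in self.blocks:
            return
        while len(self.blocks) >= self.capacity:
            # Evict the block whose loss makes the fewest tasks slower.
            victim = min(self.blocks, key=self.effective_ref_count)
            self.blocks.remove(victim)
        self.blocks.add(block)


def effective_hit_ratio(tasks, cached):
    """Fraction of block references that can actually speed a task up:
    hits count only for tasks whose dependent blocks are *all* cached."""
    hits = sum(len(deps) for deps in tasks if deps <= cached)
    total = sum(len(deps) for deps in tasks)
    return hits / total if total else 0.0


# Blocks "a" and "c" are cached; task 1 needs {a, b}, task 2 needs {c}.
# The plain hit ratio is 2/3, but the hit on "a" cannot speed task 1 up,
# so the effective hit ratio is only 1/3.
tasks = [{"a", "b"}, {"c"}]
print(effective_hit_ratio(tasks, cached={"a", "c"}))  # 0.333...
```

Note the contrast with LRU in this toy model: LRU may keep a recently touched block whose sibling blocks were already evicted, whereas the LERC-style rule evicts such orphaned blocks first (their effective reference count is zero), since holding them yields no task speedup.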
| Original language | English |
|---|---|
| Title of host publication | 2017 IEEE Global Communications Conference, GLOBECOM 2017 - Proceedings |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 1-6 |
| Number of pages | 6 |
| Volume | 2018-January |
| ISBN (Electronic) | 9781509050192 |
| DOIs | 10.1109/GLOCOM.2017.8254999 |
| Publication status | Published - 10 Jan 2018 |
| Externally published | Yes |
| Event | 2017 IEEE Global Communications Conference, GLOBECOM 2017 - Singapore, Singapore. Duration: 4 Dec 2017 → 8 Dec 2017 |
Other

| Other | 2017 IEEE Global Communications Conference, GLOBECOM 2017 |
|---|---|
| Country | Singapore |
| City | Singapore |
| Period | 4/12/17 → 8/12/17 |
ASJC Scopus subject areas
- Computer Networks and Communications
- Hardware and Architecture
- Safety, Risk, Reliability and Quality
Cite this
LERC: Coordinated cache management for data-parallel systems. / Yu, Yinghao; Wang, Wei; Zhang, Jun; Letaief, Khaled.
2017 IEEE Global Communications Conference, GLOBECOM 2017 - Proceedings. Vol. 2018-January. Institute of Electrical and Electronics Engineers Inc., 2018. p. 1-6.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution