On the efficacy of GPU-integrated MPI for scientific applications

Ashwin M. Aji, Lokendra S. Panwar, Feng Ji, Milind Chabbi, Karthik Murthy, Pavan Balaji, Keith R. Bisset, James Dinan, Wu-chun Feng, John Mellor-Crummey, Xiaosong Ma, Rajeev Thakur

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Citations (Scopus)

Abstract

Scientific computing applications are quickly adapting to leverage the massive parallelism of GPUs in large-scale clusters. However, the current hybrid programming models require application developers to explicitly manage the disjoint host and GPU memories, reducing both efficiency and productivity. Consequently, GPU-integrated MPI solutions, such as MPI-ACC and MVAPICH2-GPU, have been developed to provide unified programming interfaces and optimized implementations for end-to-end data communication among CPUs and GPUs. To date, however, there has been no in-depth characterization of the new optimization spaces or of the productivity impact of such GPU-integrated communication systems on scientific applications. In this paper, we study the efficacy of GPU-integrated MPI on scientific applications from domains such as epidemiology simulation and seismology modeling, and we discuss the lessons learned. We use MPI-ACC as an example implementation and demonstrate how the programmer can seamlessly choose either the CPU or the GPU as the logical communication endpoint, depending on the application's computational requirements. MPI-ACC also encourages programmers to explore novel application-specific optimizations, such as overlapping internode CPU-GPU communication with concurrent CPU and GPU computations, which can improve overall cluster utilization. Furthermore, MPI-ACC internally implements scalable memory management techniques, decoupling the low-level memory optimizations from the applications and making them scalable and portable across several architectures. Experimental results from a state-of-the-art cluster with hundreds of GPUs show that the new MPI-ACC-driven, application-specific optimizations can improve the performance of an epidemiology simulation by up to 61.6% and of a seismology modeling application by up to 44%, compared with traditional hybrid MPI+GPU implementations. We conclude that GPU-integrated MPI significantly enhances programmer productivity and has the potential to improve the performance and portability of scientific applications, a significant step toward making GPUs 'first-class citizens' of hybrid CPU-GPU clusters.
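To make the contrast in the abstract concrete, the following minimal sketch sets the traditional hybrid MPI+GPU send path against a GPU-integrated one, and includes the kind of communication-computation overlap described above. It is illustrative only: it assumes a CUDA-aware MPI library that accepts device pointers directly (the MVAPICH2-GPU convention; MPI-ACC's actual interface instead marks GPU buffers through MPI datatype attributes), and scale_kernel, N, and the message tags are hypothetical stand-ins rather than code from the paper.

/* Sketch: traditional hybrid MPI+GPU vs. GPU-integrated MPI (compile as .cu
 * with a CUDA-aware MPI library; not the paper's actual code). */
#include <mpi.h>
#include <cuda_runtime.h>

#define N (1 << 20)

__global__ void scale_kernel(double *buf, double a) {  /* hypothetical compute kernel */
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) buf[i] *= a;
}

/* Traditional hybrid MPI+GPU: the programmer stages data through host memory. */
static void send_traditional(const double *d_buf, double *h_buf, int peer) {
    cudaMemcpy(h_buf, d_buf, N * sizeof(double), cudaMemcpyDeviceToHost);
    MPI_Send(h_buf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
}

/* GPU-integrated MPI: the device buffer itself is the logical communication
 * endpoint; pipelining through pinned staging buffers happens inside the MPI
 * library rather than in application code. */
static void send_gpu_integrated(const double *d_buf, int peer) {
    MPI_Send((void *)d_buf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
}

/* The application-specific optimization the abstract highlights: overlap an
 * internode transfer from GPU memory with independent GPU computation. */
static void send_with_overlap(double *d_send, double *d_work,
                              int peer, cudaStream_t stream) {
    MPI_Request req;
    MPI_Isend(d_send, N, MPI_DOUBLE, peer, 1, MPI_COMM_WORLD, &req);
    scale_kernel<<<(N + 255) / 256, 256, 0, stream>>>(d_work, 2.0);
    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* communication completes here */
    cudaStreamSynchronize(stream);       /* computation completes here */
}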

Original language: English
Title of host publication: HPDC 2013 - Proceedings of the 22nd ACM International Symposium on High-Performance Parallel and Distributed Computing
Pages: 191-202
Number of pages: 12
DOI: 10.1145/2462902.2462915
Publication status: Published - 17 Jul 2013
Externally published: Yes
Event: 22nd ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2013 - New York, NY, United States
Duration: 17 Jun 2013 - 21 Jun 2013

Keywords

  • computational epidemiology
  • GPGPU
  • MPI
  • MPI-ACC
  • seismology

ASJC Scopus subject areas

  • Software

Cite this

Aji, A. M., Panwar, L. S., Ji, F., Chabbi, M., Murthy, K., Balaji, P., ... Thakur, R. (2013). On the efficacy of GPU-integrated MPI for scientific applications. In HPDC 2013 - Proceedings of the 22nd ACM International Symposium on High-Performance Parallel and Distributed Computing (pp. 191-202) https://doi.org/10.1145/2462902.2462915
