Tuning high-performance scientific codes

The use of performance models to control resource usage during data migration and I/O

J. Lee, M. Winslett, Xiaosong Ma, S. Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

Large-scale parallel simulations are a popular tool for investigating phenomena ranging from nuclear explosions to protein folding. These codes produce copious output that must be moved to the workstation where it will be visualized. Scientists have a variety of tools to help them with this data movement, and often have several different platforms available to them for their runs. Thus questions arise such as, which data migration approach is best for a particular code and platform? Which will provide the best end-to-end response time, or lowest cost? Scientists also control how much data is output, and how often. From a scientific perspective, the more output the better; but from a cost and response time perspective, how much output is too much? To answer these questions, we built performance models for data migration approaches and verified them on parallel and sequential platforms. We use a 3D hydrodynamics code to show how scientists can use the models to predict performance and tune the I/O aspects of their codes.
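The record does not reproduce the models themselves, so the following is a minimal sketch, assuming a simple bandwidth-plus-overhead cost structure, of the kind of analytical performance model the abstract describes: it compares the predicted end-to-end response time of two hypothetical migration strategies (write everything, then transfer, versus overlapping writing and transfer) as the amount of simulation output grows. All function names, parameters, and default values below are illustrative assumptions, not the authors' actual models.

# Hypothetical sketch of an analytical data-migration performance model.
# Formulas, parameter names, and default values are illustrative assumptions,
# not the models from the paper.

from dataclasses import dataclass


@dataclass
class Platform:
    """Rough platform parameters (bandwidths in MB/s, overhead in seconds)."""
    local_write_bw: float         # aggregate bandwidth to the local/parallel file system
    network_bw: float             # bandwidth from the compute platform to the workstation
    per_snapshot_overhead: float  # fixed cost per output step (open/close, metadata)


def write_then_transfer_time(total_mb: float, snapshots: int, p: Platform) -> float:
    """Write all output to local storage first, then move it to the workstation."""
    write = total_mb / p.local_write_bw
    transfer = total_mb / p.network_bw
    return write + transfer + snapshots * p.per_snapshot_overhead


def overlapped_transfer_time(total_mb: float, snapshots: int, p: Platform) -> float:
    """Transfer each snapshot while the next is being written; once the pipeline
    is full, the slower of the two stages dominates."""
    write = total_mb / p.local_write_bw
    transfer = total_mb / p.network_bw
    return max(write, transfer) + snapshots * p.per_snapshot_overhead


if __name__ == "__main__":
    platform = Platform(local_write_bw=200.0, network_bw=40.0, per_snapshot_overhead=0.5)
    for gb in (1, 10, 50):  # vary how much output the simulation produces
        mb = gb * 1024.0
        seq = write_then_transfer_time(mb, snapshots=20, p=platform)
        ovl = overlapped_transfer_time(mb, snapshots=20, p=platform)
        print(f"{gb:3d} GB output: write-then-transfer {seq:8.1f} s, overlapped {ovl:8.1f} s")

With parameters like these, overlapping hides most of the slower stage's cost once the output reaches tens of gigabytes; quantifying trade-offs of this kind on real parallel and sequential platforms is what the paper's validated models are intended for.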

Original language: English
Title of host publication: Proceedings of the International Conference on Supercomputing
Pages: 181-195
Number of pages: 15
Publication status: Published - 1 Jan 2001
Externally published: Yes
Event: 2001 International Conference on Supercomputing - Sorrento, Italy
Duration: 17 Jun 2001 - 21 Jun 2001



ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Lee, J., Winslett, M., Ma, X., & Yu, S. (2001). Tuning high-performance scientific codes: The use of performance models to control resource usage during data migration and I/O. In Proceedings of the International Conference on Supercomputing (pp. 181-195).
