Compiler-enabled optimization of persistent MPI Operations

Jammer, Tim ; Bischof, Christian (2022)
Compiler-enabled optimization of persistent MPI Operations.
SC22: The International Conference for High Performance Computing, Networking, Storage and Analysis. Dallas, USA (13.11.2022-18.11.2022)
doi: 10.1109/ExaMPI56604.2022.00006
Conference publication, Bibliography

Abstract

MPI is widely used for programming large HPC clusters. MPI also includes persistent operations, which specify recurring communication patterns. Using these operations is intended to yield a performance benefit over standard non-blocking communication, but in current MPI implementations this benefit is barely observable. We identify message envelope matching as one cause of the overhead; unfortunately, this matching can hardly be overlapped with computation. In this work, we explore how compiler knowledge can be used to extract a greater performance benefit from the use of persistent operations. We find that the compiler can perform some of the required matching work for persistent MPI operations: since persistent MPI requests can be used multiple times, the compiler can, in some cases, prove that message matching is only needed for the first occurrence and can be skipped entirely for subsequent instances.

In this paper, we present the required compiler analysis as well as an implementation of a communication scheme that skips the message envelope matching and instead transfers the data directly via RDMA. This substantially reduces the communication overhead that cannot be overlapped with computation. Using the Intel IMB-ASYNC benchmark, we observe a reduction in communication overhead of up to 95 percent for larger message sizes.
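
For readers unfamiliar with the API the paper targets, the following minimal sketch (not taken from the paper) shows the standard persistent point-to-point pattern of MPI_Send_init, MPI_Recv_init, and MPI_Startall. It illustrates why the optimization is plausible: the message envelope (peer rank, tag, and communicator) is fixed once at initialization, and every loop iteration restarts the same unchanged requests.

/* Minimal sketch (not from the paper): the persistent point-to-point
   pattern whose repeated envelope matching the paper eliminates.
   Assumes an even number of ranks. */
#include <mpi.h>

#define N 1024
#define ITERS 100

int main(int argc, char **argv) {
    double sendbuf[N] = {0}, recvbuf[N];
    MPI_Request reqs[2];
    int rank, peer;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = rank ^ 1;  /* pair up ranks: 0<->1, 2<->3, ... */

    /* The envelope (peer, tag 0, MPI_COMM_WORLD) is fixed here, once. */
    MPI_Send_init(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Recv_init(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    for (int i = 0; i < ITERS; ++i) {
        MPI_Startall(2, reqs);   /* restart the same requests each iteration */
        /* ... independent computation that could overlap communication ... */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

    MPI_Request_free(&reqs[0]);
    MPI_Request_free(&reqs[1]);
    MPI_Finalize();
    return 0;
}

In current implementations, each MPI_Startall still triggers envelope matching as if the exchange were new; per the abstract, the paper's compiler analysis proves the envelope cannot change across iterations, so matching after the first occurrence can be skipped and the data transferred directly via RDMA.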

Item Type: Conference publication
Published: 2022
Author(s): Jammer, Tim ; Bischof, Christian
Type of Entry: Bibliography
Title: Compiler-enabled optimization of persistent MPI Operations
Language: English
Year of Publication: November 2022
Publisher: IEEE
Book Title: Proceedings of ExaMPI 2022: Workshop on Exascale MPI
Event Title: SC22: The International Conference for High Performance Computing, Networking, Storage and Analysis
Event Location: Dallas, USA
Event Dates: 13.11.2022-18.11.2022
DOI: 10.1109/ExaMPI56604.2022.00006

Division(s)/Department(s): 20 Department of Computer Science
20 Department of Computer Science > Parallel Programming
20 Department of Computer Science > Scientific Computing
Central Facilities
Central Facilities > University Computing Center (HRZ)
Central Facilities > University Computing Center (HRZ) > High-Performance Computers
Date Deposited: 27 Feb 2023 13:57
Last Modified: 27 Jun 2023 15:40
PPN: 509088732