
Learning to Make Compiler Optimizations More Effective

Mammadli, Rahim ; Selakovic, Marija ; Pradel, Michael ; Wolf, Felix (2021)
Learning to Make Compiler Optimizations More Effective.
PLDI '21: 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation. Virtual conference (21 June 2021)
doi: 10.1145/3460945.3464952
Conference publication, Bibliography

Abstract

Because loops execute their body many times, compiler developers place much emphasis on their optimization. Nevertheless, in view of highly diverse source code and hardware, compilers still struggle to produce optimal target code. The sheer number of possible loop optimizations, including their combinations, exacerbates the problem further. Today's compilers use hard-coded heuristics to decide when, whether, and which of a limited set of optimizations to apply. Often, this leads to highly unstable behavior, making the success of compiler optimizations dependent on the precise way a loop has been written. This paper presents LoopLearner, which addresses the problem of compiler instability by predicting which way of writing a loop will lead to efficient compiled code. To this end, we train a neural network to find semantically invariant source-level transformations for loops that help the compiler generate more efficient code. Our model learns to extract useful features from the raw source code and predicts the speedup that a given transformation is likely to yield. We evaluate LoopLearner with 1,895 loops from various performance-relevant benchmarks. Applying the transformations that our model deems most favorable prior to compilation yields an average speedup of 1.14x. When trying the top-3 suggested transformations, the average speedup even increases to 1.29x. Comparing the approach with an exhaustive search through all available code transformations shows that LoopLearner helps to identify the most beneficial transformations in several orders of magnitude less time.
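The workflow sketched in the abstract, in which a learned model scores semantically invariant source-level rewrites of a loop and the top-ranked rewrites are applied before compilation, can be illustrated as follows. This is a minimal sketch under assumed interfaces, not the authors' implementation: the transformation set, the predict_speedup callback, and the tagging helpers are hypothetical placeholders, whereas in LoopLearner the prediction comes from a neural network trained on raw loop source code.

    from typing import Callable, Dict, List, Tuple

    def identity(src: str) -> str:
        """Leave the loop unchanged."""
        return src

    def unroll_by_2(src: str) -> str:
        # Placeholder: a real rewrite would duplicate the loop body and halve
        # the trip count; here we only tag the source for illustration.
        return "/* unrolled x2 */ " + src

    def interchange(src: str) -> str:
        # Placeholder for swapping two perfectly nested loops.
        return "/* interchanged */ " + src

    # Candidate semantics-preserving source-level rewrites (illustrative set).
    TRANSFORMATIONS: Dict[str, Callable[[str], str]] = {
        "identity": identity,
        "unroll_by_2": unroll_by_2,
        "interchange": interchange,
    }

    def rank_transformations(loop_src: str,
                             predict_speedup: Callable[[str], float],
                             top_k: int = 3) -> List[Tuple[str, float]]:
        """Score every candidate rewrite with the learned model and return
        the top_k (name, predicted speedup) pairs, best first."""
        scored = [(name, predict_speedup(rewrite(loop_src)))
                  for name, rewrite in TRANSFORMATIONS.items()]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return scored[:top_k]

    if __name__ == "__main__":
        loop = "for (int i = 0; i < n; ++i) a[i] = b[i] * c[i];"

        # Stand-in for the trained neural network: a fixed lookup by tag.
        def predict(src: str) -> float:
            if "unrolled" in src:
                return 1.20
            if "interchanged" in src:
                return 0.95
            return 1.00

        print(rank_transformations(loop, predict, top_k=3))

In practice, each of the top-ranked rewrites would then be compiled and measured, which is why trying the top-3 suggestions (as reported above) can raise the average speedup at only a small additional cost compared to an exhaustive search over all transformations.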

Item type: Conference publication
Published: 2021
Author(s): Mammadli, Rahim ; Selakovic, Marija ; Pradel, Michael ; Wolf, Felix
Type of entry: Bibliography
Title: Learning to Make Compiler Optimizations More Effective
Language: English
Publication date: 20 June 2021
Place of publication: New York, NY, USA
Publisher: ACM
Book title: MAPS 2021: Proceedings of the 5th ACM SIGPLAN International Symposium on Machine Programming
Event title: PLDI '21: 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation
Event location: Virtual conference
Event date: 21 June 2021
DOI: 10.1145/3460945.3464952

Department(s)/Field(s): Study Areas
20 Department of Computer Science
20 Department of Computer Science > Parallel Programming
Study Areas > Study Area Computational Engineering
Date deposited: 13 Feb 2024 15:05
Last modified: 21 May 2024 08:14
PPN: 518445658