Improving Loop Parallelization by a Combination of Static and Dynamic Analyses in HLS

Dewald, Florian ; Rohde, Johanna ; Hochberger, Christian ; Mantel, Heiko (2022)
Improving Loop Parallelization by a Combination of Static and Dynamic Analyses in HLS.
In: ACM Transactions on Reconfigurable Technology and Systems, 15 (3)
doi: 10.1145/3501801
Article, Bibliography

Abstract

High-level synthesis (HLS) can be used to create hardware accelerators for compute-intensive software parts such as loop structures. Usually, this process requires a significant amount of user interaction to steer kernel selection and optimizations, which can be tedious and time-consuming. In this article, we present an approach that fully autonomously finds independent loop iterations and reductions to create parallelized accelerators. We combine static analysis with information available only at runtime to maximize the parallelism exploited by the created accelerators. For loops where we see potential for parallelism, we create fully parallelized kernel implementations. If static information does not suffice to deduce independence, then we assume independence at compile time. We verify this assumption with statically created checks that are evaluated dynamically at runtime before the optimized kernel is used. In our evaluation, we generate speedups for five out of seven benchmarks. With four loop iterations running in parallel, we achieve ideal speedups of up to 4× and an average speedup of 2.27×, both in comparison to an unoptimized accelerator.
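
The abstract describes the mechanism only at a high level. The following minimal C sketch illustrates the general idea of guarding a parallelized kernel with a statically created independence check that is evaluated at runtime; it is not the authors' implementation, and the loop body, the four-way unrolling, and the names kernel_sequential, kernel_parallel, and run_loop are hypothetical illustrations.

#include <stddef.h>
#include <stdint.h>

/* Sequential reference kernel (unoptimized baseline). */
static void kernel_sequential(int *dst, const int *src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * 2;
}

/* Stand-in for the parallelized accelerator kernel: four iterations
 * per step, mirroring the four-way parallelism mentioned in the abstract. */
static void kernel_parallel(int *dst, const int *src, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        dst[i]     = src[i]     * 2;
        dst[i + 1] = src[i + 1] * 2;
        dst[i + 2] = src[i + 2] * 2;
        dst[i + 3] = src[i + 3] * 2;
    }
    for (; i < n; i++)
        dst[i] = src[i] * 2;
}

/* Statically created check, evaluated at runtime: the optimized kernel
 * is only used when the address ranges of dst and src do not overlap,
 * i.e., when the compile-time independence assumption actually holds. */
static void run_loop(int *dst, const int *src, size_t n) {
    uintptr_t d = (uintptr_t)dst, s = (uintptr_t)src;
    int disjoint = (d + n * sizeof(int) <= s) || (s + n * sizeof(int) <= d);
    if (disjoint)
        kernel_parallel(dst, src, n);
    else
        kernel_sequential(dst, src, n);
}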

Item Type: Article
Published: 2022
Creators: Dewald, Florian ; Rohde, Johanna ; Hochberger, Christian ; Mantel, Heiko
Type of entry: Bibliography
Title: Improving Loop Parallelization by a Combination of Static and Dynamic Analyses in HLS
Language: English
Date: 4 February 2022
Publisher: ACM
Journal or Publication Title: ACM Transactions on Reconfigurable Technology and Systems
Volume of the journal: 15
Issue Number: 3
DOI: 10.1145/3501801
Uncontrolled Keywords: loop parallelization, scalar evolution analysis, high-level synthesis, system-on-chip, FPGA
Additional Information: Art.No.: 31

Divisions: 18 Department of Electrical Engineering and Information Technology
18 Department of Electrical Engineering and Information Technology > Institute of Computer Engineering
18 Department of Electrical Engineering and Information Technology > Institute of Computer Engineering > Computer Systems Group
Date Deposited: 11 Apr 2024 12:35
Last Modified: 18 Jul 2024 14:58
PPN: 520009118