Hanafy, Yasmin Adel ; Mashaly, Maggie ; Abd El Ghany, Mohamed A. (2021)
An Efficient Hardware Design for a Low-Latency Traffic Flow Prediction System Using an Online Neural Network.
In: Electronics, 10 (16)
doi: 10.3390/electronics10161875
Article, Bibliography
This is the latest version of this entry.
Abstract
Neural networks are computing systems inspired by the biological neural networks in human brains. They are typically trained in a batch learning mode; hence, the whole training dataset must be available before the training task. However, this is not feasible for many real-time applications where data arrive sequentially, such as online topic detection in social communities and traffic flow prediction. In this paper, an efficient hardware implementation of a low-latency online neural network system is proposed for a traffic flow prediction application. The proposed model is implemented with different Machine Learning (ML) algorithms to predict the traffic flow with high accuracy, where the Hedge Backpropagation (HBP) model achieves the lowest mean absolute error (MAE) of 0.001. The proposed system is implemented using floating-point and fixed-point arithmetic on the Field Programmable Gate Array (FPGA) part of the ZedBoard. The implementation is provided using a BRAM architecture and distributed memory in the FPGA in order to achieve the best trade-off between latency, area consumption, and power. Using the fixed-point approach, the prediction times with the distributed-memory and BRAM architectures are 150 ns and 420 ns, respectively. The area-delay product (ADP) of the proposed system is reduced by 17× compared with the hardware implementation of the latest system proposed in the literature. The execution time of the proposed hardware system is improved by 200× compared with the software implementation on a dual-core Intel i7-7500U CPU at 2.9 GHz. Consequently, the proposed hardware model is faster than the software model and better suited for time-critical online machine learning models.
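The Hedge Backpropagation approach mentioned in the abstract combines predictions from multiple network depths and multiplicatively re-weights each depth by its observed loss, which is what makes it suitable for sequentially arriving data. The sketch below illustrates only that weight-combination step; the class name, the discount factor β, and the smoothing floor are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of the Hedge weight-combination step used in Hedge
# Backpropagation: each layer contributes a prediction, and a weight
# vector alpha over the layers is updated multiplicatively by beta**loss.
class HedgePredictor:
    def __init__(self, n_layers, beta=0.99, smoothing=1e-3):
        self.alpha = np.full(n_layers, 1.0 / n_layers)  # per-layer weights
        self.beta = beta            # discount factor (assumed value)
        self.smoothing = smoothing  # floor keeping every layer trainable

    def predict(self, layer_outputs):
        # Ensemble prediction: alpha-weighted sum of the per-layer outputs.
        return float(np.dot(self.alpha, layer_outputs))

    def update(self, layer_outputs, target):
        # Hedge update: discount each layer's weight by beta**loss,
        # apply the smoothing floor, then renormalise to sum to 1.
        losses = (np.asarray(layer_outputs) - target) ** 2
        self.alpha *= self.beta ** losses
        self.alpha = np.maximum(self.alpha, self.smoothing / len(self.alpha))
        self.alpha /= self.alpha.sum()
```

After an update, layers whose predictions were closer to the target carry more weight in the next prediction, so the effective network depth adapts online to the data stream.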
Type of entry: | Article |
---|---|
Published: | 2021 |
Author(s): | Hanafy, Yasmin Adel ; Mashaly, Maggie ; Abd El Ghany, Mohamed A. |
Kind of entry: | Bibliography |
Title: | An Efficient Hardware Design for a Low-Latency Traffic Flow Prediction System Using an Online Neural Network |
Language: | English |
Date of publication: | 4 August 2021 |
Place of publication: | Basel |
Publisher: | MDPI |
Journal or publication title: | Electronics |
Volume of the journal: | 10 |
Issue number: | 16 |
Collation: | 22 pages |
DOI: | 10.3390/electronics10161875 |
Uncontrolled keywords: | direct memory access, field programmable gate array, Hedge Backpropagation, online neural network |
Additional information: | First publication; Art. No.: 1875; This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS) |
Dewey Decimal Classification (DDC): | 600 Technology, medicine, applied sciences > 621.3 Electrical engineering, electronics |
Division(s)/Department(s): | 18 Fachbereich Elektrotechnik und Informationstechnik; 18 Fachbereich Elektrotechnik und Informationstechnik > Institut für Datentechnik; 18 Fachbereich Elektrotechnik und Informationstechnik > Institut für Datentechnik > Integrierte Elektronische Systeme (IES) |
Date deposited: | 28 Feb 2024 07:38 |
Last modified: | 28 Feb 2024 07:38 |
Available versions of this entry
- An Efficient Hardware Design for a Low-Latency Traffic Flow Prediction System Using an Online Neural Network. (deposited 12 Jan 2024 14:51)
- An Efficient Hardware Design for a Low-Latency Traffic Flow Prediction System Using an Online Neural Network. (deposited 28 Feb 2024 07:38) [Currently displayed]