
Method for the application of deep reinforcement learning for optimised control of industrial energy supply systems by the example of a central cooling system

Weigold, Matthias ; Ranzau, Heiko ; Schaumann, Sarah ; Kohne, Thomas ; Panten, Niklas ; Abele, Eberhard (2021)
Method for the application of deep reinforcement learning for optimised control of industrial energy supply systems by the example of a central cooling system.
In: CIRP Annals, 70 (1)
doi: 10.1016/j.cirp.2021.03.021
Article, Bibliography

Abstract

This paper presents a method for data- and model-driven control optimisation for industrial energy supply systems (IESS) by means of deep reinforcement learning (DRL). The method consists of five steps: system boundary definition and data accumulation, system modelling and validation, implementation of DRL algorithms, performance comparison, and adaptation or application of the control strategy. The method is successfully applied to a simulation of an industrial cooling system using the PPO (proximal policy optimisation) algorithm. Significant reductions in electricity cost of 3% to 17% as well as in CO2 emissions of 2% to 11% are achieved. The DRL-based control strategy is interpreted and three main reasons for the performance increase are identified. The DRL controller reduces energy cost by utilising the storage capacity of the cooling system and shifting electricity demand to times of lower prices. Additionally, the DRL-based control strategy for the cooling towers as well as the compression chillers reduces electricity cost and wear-related cost alike.
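For illustration only, the sketch below shows how the training and application steps of such a method could look in code, using the PPO implementation from Stable-Baselines3 on a toy stand-in environment. The environment CoolingSystemEnv, its dynamics, observations and cost terms are hypothetical placeholders for the validated cooling-system simulation described in the paper, and the library choice is an assumption, not taken from the source.

```python
# Minimal sketch (assumed setup, not the authors' implementation):
# a PPO agent trained on a placeholder cooling-system environment.
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO


class CoolingSystemEnv(gym.Env):
    """Toy stand-in for an industrial cooling-system simulation.

    Observations: storage temperature deviation, cooling load, electricity price.
    Actions: continuous set-points for compression chillers and cooling towers.
    Reward: negative sum of electricity cost and wear-related cost.
    """

    def __init__(self):
        self.observation_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
        self.action_space = gym.spaces.Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32)
        self.state = np.zeros(3, dtype=np.float32)
        self.steps = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(-0.1, 0.1, size=3).astype(np.float32)
        self.steps = 0
        return self.state, {}

    def step(self, action):
        # Placeholder dynamics: storage temperature drifts with the cooling load
        # and is reduced by chiller/tower effort; cost scales with effort and price.
        load, price = self.np_random.uniform(0.0, 1.0, size=2)
        temperature = np.clip(self.state[0] + 0.1 * load - 0.1 * action.sum(), -1.0, 1.0)
        self.state = np.array([temperature, load, price], dtype=np.float32)
        energy_cost = price * action.sum()
        wear_cost = 0.05 * np.abs(action).sum()
        reward = -float(energy_cost + wear_cost)
        self.steps += 1
        truncated = self.steps >= 96  # e.g. one day at 15-minute resolution
        return self.state, reward, False, truncated, {}


if __name__ == "__main__":
    env = CoolingSystemEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)          # train the DRL controller on the simulation
    obs, _ = env.reset()
    action, _ = model.predict(obs, deterministic=True)  # apply the learned control strategy
```

In the paper's setting, the placeholder reward would be replaced by the actual electricity-price signal and wear model of the cooling system, which is what allows the learned policy to shift demand to times of lower prices.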

Item type: Article
Published: 2021
Author(s): Weigold, Matthias ; Ranzau, Heiko ; Schaumann, Sarah ; Kohne, Thomas ; Panten, Niklas ; Abele, Eberhard
Entry type: Bibliography
Title: Method for the application of deep reinforcement learning for optimised control of industrial energy supply systems by the example of a central cooling system
Language: English
Publication date: 11 June 2021
Publisher: Elsevier
Journal or series title: CIRP Annals
Volume: 70
Issue: 1
DOI: 10.1016/j.cirp.2021.03.021
URL / URN: https://www.sciencedirect.com/science/article/abs/pii/S00078...
Keywords: CO2 reduced production, Energy Efficiency, Machine learning
Department(s): 16 Department of Mechanical Engineering
16 Department of Mechanical Engineering > Institute for Production Management and Machine Tools (PTW)
Date deposited: 10 Nov 2021 07:18
Last modified: 09 Jun 2022 05:15