
Approximately Solving Mean Field Games via Entropy-Regularized Deep Reinforcement Learning

Cui, Kai ; Koeppl, Heinz (2021)
Approximately Solving Mean Field Games via Entropy-Regularized Deep Reinforcement Learning.
24th International Conference on Artificial Intelligence and Statistics. Virtual Conference (13.04.2021-15.04.2021)
Conference publication, Bibliography

This is the latest version of this entry.

Abstract

The recent mean field game (MFG) formalism facilitates otherwise intractable computation of approximate Nash equilibria in many-agent settings. In this paper, we consider discrete-time finite MFGs subject to finite-horizon objectives. We show that all discrete-time finite MFGs with non-constant fixed point operators fail to be contractive as typically assumed in existing MFG literature, barring convergence via fixed point iteration. Instead, we incorporate entropy-regularization and Boltzmann policies into the fixed point iteration. As a result, we obtain provable convergence to approximate fixed points where existing methods fail, and reach the original goal of approximate Nash equilibria. All proposed methods are evaluated with respect to their exploitability, on both instructive examples with tractable exact solutions and high-dimensional problems where exact methods become intractable. In high-dimensional scenarios, we apply established deep reinforcement learning methods and empirically combine fictitious play with our approximations.
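
The abstract describes incorporating entropy regularization and Boltzmann (softmax) policies into the fixed point iteration for finite MFGs. The following is a minimal sketch of that idea on a toy problem, not the authors' implementation: the state/action spaces, transition kernel, mean-field reward, and temperature are placeholder assumptions.

# Minimal sketch: entropy-regularized fixed point iteration for a toy finite MFG.
# Transition kernel, reward, and temperature are illustrative placeholders.
import numpy as np

S, A, T = 3, 2, 10                              # states, actions, horizon
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))      # P[s, a] = next-state distribution

def reward(s, a, mu):
    # Placeholder mean-field reward: crowd aversion plus a small action cost.
    return -mu[s] - 0.1 * a

def boltzmann_best_response(mu_flow, temperature=0.5):
    # Backward induction with softmax (Boltzmann) policies instead of argmax.
    V = np.zeros(S)
    policy = np.zeros((T, S, A))
    for t in reversed(range(T)):
        Q = np.array([[reward(s, a, mu_flow[t]) + P[s, a] @ V for a in range(A)]
                      for s in range(S)])
        policy[t] = np.exp(Q / temperature)
        policy[t] /= policy[t].sum(axis=1, keepdims=True)
        # Soft (log-sum-exp) value of the entropy-regularized objective.
        V = temperature * np.log(np.exp(Q / temperature).sum(axis=1))
    return policy

def induced_mean_field(policy, mu0):
    # Forward propagation of the state distribution under the policy.
    mu_flow = [mu0]
    for t in range(T - 1):
        mu_flow.append(sum(mu_flow[t][s] * policy[t, s, a] * P[s, a]
                           for s in range(S) for a in range(A)))
    return mu_flow

# Fixed point iteration: mean field -> Boltzmann best response -> induced mean field.
mu_flow = [np.ones(S) / S for _ in range(T)]
for _ in range(100):
    mu_flow = induced_mean_field(boltzmann_best_response(mu_flow), np.ones(S) / S)

Under the assumed temperature, the softmax step smooths the best-response map, which is the mechanism the abstract credits for restoring convergence of the iteration where the unregularized operator fails to be contractive.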

Item type: Conference publication
Published: 2021
Author(s): Cui, Kai ; Koeppl, Heinz
Type of entry: Bibliography
Title: Approximately Solving Mean Field Games via Entropy-Regularized Deep Reinforcement Learning
Language: English
Year of publication: 2021
Event title: 24th International Conference on Artificial Intelligence and Statistics
Event location: Virtual Conference
Event dates: 13.04.2021-15.04.2021
Free keywords: emergenCITY_KOM
Department(s)/field(s): 18 Department of Electrical Engineering and Information Technology
18 Department of Electrical Engineering and Information Technology > Institute for Telecommunications > Bioinspired Communication Systems
18 Department of Electrical Engineering and Information Technology > Institute for Telecommunications
LOEWE
LOEWE > LOEWE centres
LOEWE > LOEWE centres > emergenCITY
Central facilities
Central facilities > University Computing Centre (HRZ)
Central facilities > University Computing Centre (HRZ) > High-performance computer
TU projects: HMWK|III L6-519/03/05.001-(0016)|emergenCity TP Bock
Date deposited: 22 Feb 2021 07:28
Last modified: 03 Jul 2024 02:49