
Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior

Cui, Kai ; Hauck, Sascha ; Fabian, Christian ; Koeppl, Heinz (2024)
Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior.
International Conference on Learning Representations. Vienna, Austria (07.05.2024 - 11.05.2024)
doi: 10.26083/tuprints-00028689
Conference publication, secondary publication, publisher's version

Warning: A newer version of this entry is available.

Abstract

Recent reinforcement learning (RL) methods have achieved success in various domains. However, multi-agent RL (MARL) remains challenging in terms of decentralization, partial observability, and scalability to many agents. Meanwhile, collective behavior requires the resolution of these challenges and remains important to many state-of-the-art applications such as active matter physics, self-organizing systems, opinion dynamics, and biological or robotic swarms. Here, MARL via mean field control (MFC) offers a potential solution to scalability, but fails to consider decentralized and partially observable systems. In this paper, we enable decentralized behavior of agents under partial information by proposing novel models for decentralized partially observable MFC (Dec-POMFC), a broad class of problems with permutation-invariant agents that allows reduction to tractable single-agent Markov decision processes (MDPs) solvable by single-agent RL. We provide rigorous theoretical results, including a dynamic programming principle, together with optimality guarantees for Dec-POMFC solutions applied to finite swarms of interest. Algorithmically, we propose Dec-POMFC-based policy gradient methods for MARL via centralized training and decentralized execution, together with policy gradient approximation guarantees. In addition, we improve upon state-of-the-art histogram-based MFC by kernel methods, which is also of independent interest for fully observable MFC. We evaluate numerically on representative collective behavior tasks such as adapted Kuramoto and Vicsek swarming models, performing on par with state-of-the-art MARL. Overall, our framework takes a step towards RL-based engineering of artificial collective behavior via MFC.
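To make the benchmark setting and the kernel idea mentioned above concrete, the following minimal Python sketch (not the authors' implementation; the function names kuramoto_step and kernel_mean_field, the wrapped-Gaussian bandwidth, and all parameter values are illustrative assumptions) simulates a Kuramoto phase swarm and estimates its mean field, i.e., the empirical distribution of agent phases, with a smooth kernel instead of a histogram:

    import numpy as np

    def kuramoto_step(theta, omega, coupling, dt=0.05, noise_std=0.0, rng=None):
        # One Euler step of the classic Kuramoto model:
        #   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
        rng = rng if rng is not None else np.random.default_rng()
        interaction = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
        noise = noise_std * np.sqrt(dt) * rng.standard_normal(theta.shape)
        return np.mod(theta + dt * (omega + coupling * interaction) + noise, 2 * np.pi)

    def kernel_mean_field(theta, grid, bandwidth=0.2):
        # Kernel estimate of the empirical phase distribution on a grid,
        # replacing a piecewise-constant histogram; the Gaussian kernel is
        # wrapped onto the circle via the shortest angular difference.
        diff = np.angle(np.exp(1j * (grid[:, None] - theta[None, :])))
        density = np.exp(-0.5 * (diff / bandwidth) ** 2).sum(axis=1)
        return density / np.trapz(density, grid)  # normalize to a density

    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, 2.0 * np.pi, 200)   # 200 agents, random initial phases
    omega = rng.normal(0.0, 0.5, 200)            # heterogeneous natural frequencies
    for _ in range(100):
        theta = kuramoto_step(theta, omega, coupling=1.5, rng=rng)

    grid = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
    mu_kernel = kernel_mean_field(theta, grid)   # smooth mean field representation
    mu_hist, _ = np.histogram(theta, bins=32, range=(0, 2 * np.pi), density=True)

The comparison suggests the design point: a kernel estimate varies smoothly as agents move, whereas a histogram changes only when an agent crosses a bin boundary. A smoother mean field input is one plausible reason kernel methods can help policy gradient training; see the paper itself for the actual construction.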

Item type: Conference publication
Published: 2024
Author(s): Cui, Kai ; Hauck, Sascha ; Fabian, Christian ; Koeppl, Heinz
Type of entry: Secondary publication
Title: Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior
Language: English
Publication date: 25 November 2024
Place of publication: Darmstadt
Date of first publication: 8 May 2024
Publisher: ICLR
Book title: ICLR 2024: The Twelfth International Conference on Learning Representations
Collation: 40 pages
Event title: International Conference on Learning Representations
Event location: Vienna, Austria
Event dates: 07.05.2024 - 11.05.2024
DOI: 10.26083/tuprints-00028689
URL / URN: https://tuprints.ulb.tu-darmstadt.de/28689
Origin: Secondary publication service

Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-286893
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science
600 Technology, medicine, applied sciences > 621.3 Electrical engineering, electronics
Department(s)/field(s): 18 Department of Electrical Engineering and Information Technology
18 Department of Electrical Engineering and Information Technology > Institute of Telecommunications > Bioinspired Communication Systems
18 Department of Electrical Engineering and Information Technology > Institute of Telecommunications
18 Department of Electrical Engineering and Information Technology > Self-Organizing Systems Lab
Date deposited: 25 Nov 2024 10:49
Last modified: 28 Nov 2024 08:46