Belousov, Boris ; Peters, Jan (2019)
Entropic Regularization of Markov Decision Processes.
In: Entropy, 2019, 21 (7)
Article, secondary publication
A newer version of this entry is available.
Abstract
An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration. However, if the system dynamics and the reward function are unknown, a learning agent must discover an optimal controller via direct interaction with the environment. Such interactive data gathering commonly leads to divergence towards dangerous or uninformative regions of the state space unless additional regularization measures are taken. Prior works proposed bounding the information loss, measured by the Kullback–Leibler (KL) divergence, at every policy improvement step to eliminate instability in the learning dynamics. In this paper, we consider a broader family of f-divergences, and more concretely α-divergences, which inherit the beneficial property of providing the policy improvement step in closed form while at the same time yielding a corresponding dual objective for policy evaluation. Such an entropic proximal policy optimization view gives a unified perspective on compatible actor-critic architectures. In particular, common least-squares value function estimation coupled with advantage-weighted maximum likelihood policy improvement is shown to correspond to the Pearson χ²-divergence penalty. Other actor-critic pairs arise for various choices of the penalty-generating function f. On a concrete instantiation of our framework with the α-divergence, we carry out asymptotic analysis of the solutions for different values of α and demonstrate the effects of the divergence function choice on standard reinforcement learning problems.
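The closed-form policy improvement step mentioned in the abstract can be made concrete for the classical KL case. The following minimal sketch is not taken from the paper; the function name, the temperature η, and the toy numbers are illustrative assumptions. It shows how a KL penalty of strength η turns advantage estimates into an exponentially reweighted policy on a discrete action space; other choices of the generating function f lead to different weightings, such as the affine weighting associated with the Pearson χ² penalty.

```python
# Illustrative sketch (not the paper's implementation): closed-form
# KL-penalized policy improvement on a discrete action space.
# Minimizing  E_pi_new[-A(s, a)] + eta * KL(pi_new || pi_old)  over the
# probability simplex gives  pi_new(a|s) ∝ pi_old(a|s) * exp(A(s, a) / eta).
import numpy as np

def kl_penalized_improvement(pi_old, advantages, eta=1.0):
    """Return the closed-form improved policy for a KL penalty of strength eta."""
    logits = np.log(pi_old) + advantages / eta   # exponential advantage weighting
    logits -= logits.max()                       # numerical stabilization
    pi_new = np.exp(logits)
    return pi_new / pi_new.sum()                 # renormalize onto the simplex

if __name__ == "__main__":
    pi_old = np.array([0.25, 0.25, 0.25, 0.25])  # uniform old policy over 4 actions
    advantages = np.array([1.0, 0.0, -0.5, 0.2]) # hypothetical advantage estimates
    for eta in (10.0, 1.0, 0.1):                 # large eta: small step; small eta: near-greedy
        print(eta, kl_penalized_improvement(pi_old, advantages, eta))
```

The temperature η plays the role of the penalty strength: as η grows the new policy stays close to the old one, and as η shrinks the update concentrates on the highest-advantage action.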
| Item type: | Article |
|---|---|
| Published: | 2019 |
| Author(s): | Belousov, Boris ; Peters, Jan |
| Type of entry: | Secondary publication |
| Title: | Entropic Regularization of Markov Decision Processes |
| Language: | English |
| Year of publication: | 2019 |
| Place of publication: | Darmstadt |
| Date of first publication: | 2019 |
| Publisher: | MDPI |
| Journal or series title: | Entropy |
| Volume: | 21 |
| Issue number: | 7 |
| URL / URN: | urn:nbn:de:tuda-tuprints-92409 |
| Related links: | |
| Origin: | Secondary publication from funded Golden Open Access |
| URN: | urn:nbn:de:tuda-tuprints-92409 |
| Dewey Decimal Classification (DDC): | 000 Generalities, computer science, information science > 004 Computer science |
| Department(s)/Institute(s): | 20 Department of Computer Science; 20 Department of Computer Science > Intelligent Autonomous Systems |
| Date deposited: | 03 Nov 2019 20:57 |
| Last modified: | 06 Dec 2023 07:03 |
| PPN: | |
Available versions of this entry
- Entropic Regularization of Markov Decision Processes. (deposited 03 Nov 2019 20:57) [Currently displayed]