f-Divergence constrained policy improvement

Belousov, Boris ; Peters, Jan (2023)
f-Divergence constrained policy improvement.
doi: 10.26083/tuprints-00020553
Report, secondary publication, preprint

Warning: A newer version of this entry is available.

Abstract

To ensure stability of learning, state-of-the-art generalized policy iteration algorithms augment the policy improvement step with a trust region constraint bounding the information loss. The size of the trust region is commonly determined by the Kullback-Leibler (KL) divergence, which not only captures the notion of distance well but also yields closed-form solutions. In this paper, we consider a more general class of f-divergences and derive the corresponding policy update rules. The generic solution is expressed through the derivative of the convex conjugate function to f and includes the KL solution as a special case. Within the class of f-divergences, we further focus on a one-parameter family of α-divergences to study the effects of the choice of divergence on policy improvement. Previously known as well as new policy updates emerge for different values of α. We show that every type of policy update comes with a compatible policy evaluation resulting from the chosen f-divergence. Interestingly, mean-squared Bellman error minimization is closely related to policy evaluation with the Pearson χ²-divergence penalty, whereas the KL divergence results in the soft-max policy update and a log-sum-exp critic. We carry out an asymptotic analysis of the solutions for different values of α and demonstrate the effects of using different divergence functions on a multi-armed bandit problem and on standard reinforcement learning problems.
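As a schematic illustration of the update described in the abstract (the notation below is assumed for exposition and is not quoted from the paper), the constrained policy improvement step can be written as

\max_{\pi}\; \mathbb{E}_{s \sim \mu,\, a \sim \pi(\cdot\mid s)}\big[A^{\pi_{\mathrm{old}}}(s,a)\big]
\quad \text{s.t.} \quad
\mathbb{E}_{s \sim \mu}\big[D_f\big(\pi(\cdot\mid s)\,\big\|\,\pi_{\mathrm{old}}(\cdot\mid s)\big)\big] \le \epsilon,
\qquad
D_f(p\,\|\,q) = \int q(a)\, f\!\Big(\tfrac{p(a)}{q(a)}\Big)\, da.

Lagrangian duality then yields a generic update expressed through the derivative of the convex conjugate f^*,

\pi(a\mid s) \;\propto\; \pi_{\mathrm{old}}(a\mid s)\,(f^*)'\!\Big(\tfrac{A^{\pi_{\mathrm{old}}}(s,a) - \nu(s)}{\lambda}\Big),

where \lambda \ge 0 is the multiplier of the trust-region constraint and \nu(s) enforces normalization. For the KL divergence, f(x) = x\log x and (f^*)'(y) = e^{y-1}, so the update reduces to the soft-max (exponentiated-advantage) policy with a log-sum-exp normalizer, the special case named in the abstract. Under one common parameterization of the α-family, f_\alpha(x) = \frac{x^\alpha - \alpha x + \alpha - 1}{\alpha(\alpha-1)}, the limit \alpha \to 1 recovers the KL divergence, while \alpha = 2 gives the Pearson χ²-divergence, f_2(x) = \frac{(x-1)^2}{2}.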

Item type: Report
Published: 2023
Author(s): Belousov, Boris ; Peters, Jan
Type of entry: Secondary publication
Title: f-Divergence constrained policy improvement
Language: English
Date of publication: 17 October 2023
Place of publication: Darmstadt
Date of first publication: 2017
Collation: 20 pages
DOI: 10.26083/tuprints-00020553
URL / URN: https://tuprints.ulb.tu-darmstadt.de/20553
Origin: Secondary publication service
Uncontrolled keywords: Reinforcement Learning, Policy Search, Bandit Problems
Status: Preprint
URN: urn:nbn:de:tuda-tuprints-205534
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Intelligent Autonomous Systems
TU projects: EC/H2020|640554|SKILLS4ROBOTS
Date deposited: 17 Oct 2023 15:10
Last modified: 19 Oct 2023 10:48