Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning

Parisi, Simone ; Tateo, Davide ; Hensel, Maximilian ; D’Eramo, Carlo ; Peters, Jan ; Pajarinen, Joni (2022)
Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning.
In: Algorithms, 2022, 15 (3)
doi: 10.26083/tuprints-00021017
Article, secondary publication, publisher's version

Abstract

Reinforcement learning with sparse rewards is still an open challenge. Classic methods rely on feedback from extrinsic rewards to train the agent, and when such feedback occurs very rarely the agent learns slowly or cannot learn at all. Similarly, if the agent also receives rewards that create suboptimal modes of the objective function, it will likely stop exploring prematurely. More recent methods add auxiliary intrinsic rewards to encourage exploration. However, auxiliary rewards lead to a non-stationary target for the Q-function. In this paper, we present a novel approach that (1) plans exploration actions far into the future by using a long-term visitation count, and (2) decouples exploration and exploitation by learning a separate function assessing the exploration value of the actions. Contrary to existing methods that use models of reward and dynamics, our approach is off-policy and model-free. We further propose new tabular environments for benchmarking exploration in reinforcement learning. Empirical results on classic and novel benchmarks show that the proposed approach outperforms existing methods in environments with sparse rewards, especially in the presence of rewards that create suboptimal modes of the objective function. Results also suggest that our approach scales gracefully with the size of the environment.
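
The two key ideas in the abstract lend themselves to a compact illustration. The sketch below is a minimal tabular reading of them, not the paper's exact algorithm: a Q-table is trained off-policy on the extrinsic reward alone (so its target stays stationary), while a separate W-table backs up a count-based exploration reward, so that acting on it plans exploration far into the future. The 1/sqrt(n) bonus and the additive Q+W behavior policy are illustrative assumptions.

import numpy as np

class VisitationValueAgent:
    """Tabular sketch (a hypothetical simplification, not the paper's exact
    update rules): Q estimates the extrinsic return, W estimates a long-term
    exploration value derived from visitation counts."""

    def __init__(self, n_states, n_actions, gamma=0.99, gamma_w=0.99, lr=0.1):
        self.Q = np.zeros((n_states, n_actions))  # exploitation value
        self.W = np.zeros((n_states, n_actions))  # long-term exploration value
        self.N = np.zeros((n_states, n_actions))  # visitation counts
        self.gamma, self.gamma_w, self.lr = gamma, gamma_w, lr

    def act(self, state):
        # Combining both tables is one plausible behavior policy; because W
        # is bootstrapped over many steps, it steers the agent toward rarely
        # visited regions several transitions away, not just locally novel
        # actions.
        return int(np.argmax(self.Q[state] + self.W[state]))

    def update(self, s, a, r, s_next):
        self.N[s, a] += 1
        # Off-policy Q-learning on the extrinsic reward only: no intrinsic
        # bonus is mixed in, so the Q target stays stationary.
        self.Q[s, a] += self.lr * (
            r + self.gamma * self.Q[s_next].max() - self.Q[s, a])
        # W-learning on a count-based exploration reward (1/sqrt(n) is an
        # assumed form); the max backup propagates visitation information
        # far into the future.
        r_explore = 1.0 / np.sqrt(self.N[s, a])
        self.W[s, a] += self.lr * (
            r_explore + self.gamma_w * self.W[s_next].max() - self.W[s, a])

In this reading, the agent prefers poorly explored regions until their exploration value decays, after which the stationary Q-values dominate; because Q never absorbs an auxiliary bonus, exploitation is unaffected by the non-stationarity that intrinsic-reward methods introduce.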

Item type: Article
Published: 2022
Author(s): Parisi, Simone ; Tateo, Davide ; Hensel, Maximilian ; D’Eramo, Carlo ; Peters, Jan ; Pajarinen, Joni
Type of entry: Secondary publication
Title: Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning
Language: English
Year of publication: 2022
Date of first publication: 2022
Publisher: MDPI
Journal or series title: Algorithms
Volume: 15
Issue number: 3
Collation: 44 pages
DOI: 10.26083/tuprints-00021017
URL / URN: https://tuprints.ulb.tu-darmstadt.de/21017
Related links:
Origin: Secondary publication via DeepGreen
Uncontrolled keywords: reinforcement learning, sparse reward, exploration, upper confidence bound, off-policy
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-210175
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information > 004 Computer science
600 Technology, medicine, applied sciences > 620 Engineering and mechanical engineering
Division(s)/Department(s): 20 Department of Computer Science
20 Department of Computer Science > Intelligent Autonomous Systems
Date deposited: 11 Apr 2022 11:11
Last modified: 12 Apr 2022 09:44
PPN: