Receding Horizon Curiosity

Schultheis, Matthias ; Belousov, Boris ; Abdulsamad, Hany ; Peters, Jan (2022)
Receding Horizon Curiosity.
3rd Conference on Robot Learning (CoRL 2019). Osaka, Japan (30.10.–01.11.2019)
Conference publication, Bibliography

Abstract

Sample-efficient exploration is crucial not only for discovering rewarding experiences but also for adapting to environment changes in a task-agnostic fashion. A principled treatment of the problem of optimal input synthesis for system identification is provided within the framework of sequential Bayesian experimental design. In this paper, we present an effective trajectory-optimization-based approximate solution of this otherwise intractable problem that models optimal exploration in an unknown Markov decision process (MDP). By interleaving episodic exploration with Bayesian nonlinear system identification, our algorithm takes advantage of the inductive bias to explore in a directed manner, without assuming prior knowledge of the MDP. Empirical evaluations indicate a clear advantage of the proposed algorithm in terms of the rate of convergence and the final model fidelity when compared to intrinsic-motivation-based algorithms employing exploration bonuses such as prediction error and information gain. Moreover, our method maintains a computational advantage over a recent model-based active exploration (MAX) algorithm, by focusing on the information gain along trajectories instead of seeking a global exploration policy. A reference implementation of our algorithm and the conducted experiments is publicly available.
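
To make the receding-horizon idea concrete, the following is a minimal Python sketch of the loop the abstract describes: plan a short action sequence that maximizes the expected information gain under the current Bayesian dynamics model, execute the first planned action, update the posterior on the observed transition, and re-plan. It assumes a toy 1-D linear-Gaussian system and a random-shooting planner as a stand-in for the paper's trajectory optimizer; all names are illustrative and not taken from the authors' reference implementation.

# Hypothetical sketch of receding-horizon curiosity on a toy 1-D system.
# Not the authors' code; names and the model are illustrative only.
import numpy as np

class BayesianLinearDynamics:
    """Toy Bayesian model of 1-D dynamics: s' = theta * s + a + noise."""
    def __init__(self, prior_var=10.0, noise_var=0.1):
        self.mean = 0.0          # Gaussian posterior over the unknown theta
        self.var = prior_var
        self.noise_var = noise_var

    def update(self, s, a, s_next):
        # Conjugate Gaussian update for the scalar linear-regression model.
        y = s_next - a
        new_var = 1.0 / (1.0 / self.var + s * s / self.noise_var)
        self.mean = new_var * (self.mean / self.var + s * y / self.noise_var)
        self.var = new_var

    def predict(self, s, a):
        # Posterior-mean dynamics used for planning rollouts.
        return self.mean * s + a

def plan(model, s0, horizon=5, actions=(-1.0, 0.0, 1.0), n_samples=200, seed=0):
    # Random-shooting trajectory optimization: score sampled action
    # sequences by the cumulative information gain (entropy reduction of
    # the theta-posterior) along the predicted trajectory.
    rng = np.random.default_rng(seed)
    best_seq, best_score = None, -np.inf
    for _ in range(n_samples):
        seq = rng.choice(actions, size=horizon)
        s, var, score = s0, model.var, 0.0
        for a in seq:
            post_var = 1.0 / (1.0 / var + s * s / model.noise_var)
            score += 0.5 * np.log(var / post_var)  # info gain of this step
            s, var = model.predict(s, a), post_var
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq

# Receding-horizon loop: re-plan at every step, execute only the first
# planned action, then condition the model on the observed transition.
true_theta, s = 0.7, 1.0
model = BayesianLinearDynamics()
rng = np.random.default_rng(1)
for step in range(20):
    a = plan(model, s)[0]
    s_next = true_theta * s + a + rng.normal(0.0, np.sqrt(model.noise_var))
    model.update(s, a, s_next)
    s = s_next
print(f"posterior over theta: mean={model.mean:.3f}, var={model.var:.3g}")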

Item type: Conference publication
Published: 2022
Author(s): Schultheis, Matthias ; Belousov, Boris ; Abdulsamad, Hany ; Peters, Jan
Entry type: Bibliography
Title: Receding Horizon Curiosity
Language: English
Year of publication: 2022
Place of publication: Darmstadt
Publisher: PMLR
Series: Proceedings of Machine Learning Research
Series volume: 100
Collation: 11 pages
Event title: 3rd Conference on Robot Learning (CoRL 2019)
Event location: Osaka, Japan
Event dates: 30.10.–01.11.2019
Uncontrolled keywords: Bayesian exploration, artificial curiosity, model predictive control
Dewey Decimal Classification (DDC) subject group: 000 Generalities, computer science, information science > 004 Computer science
Department(s): 20 Department of Computer Science
20 Department of Computer Science > Intelligent Autonomous Systems
TU projects: EC/H2020|640554|SKILLS4ROBOTS
Date deposited: 02 Jul 2024 23:13
Last modified: 02 Jul 2024 23:14