TU Darmstadt / ULB / TUbiblio

Inverse Reinforcement Learning via Nonparametric Spatio-Temporal Subgoal Modeling

Šošić, Adrian ; Rueckert, Elmar ; Peters, Jan ; Zoubir, Abdelhak M. ; Koeppl, Heinz (2024)
Inverse Reinforcement Learning via Nonparametric Spatio-Temporal Subgoal Modeling.
In: Journal of Machine Learning Research, 2018, 19 (69)
doi: 10.26083/tuprints-00026700
Article, secondary publication, publisher's version

Warning: A newer version of this entry is available.

Abstract

Advances in the field of inverse reinforcement learning (IRL) have led to sophisticated inference frameworks that relax the original modeling assumption of observing an agent behavior that reflects only a single intention. Instead of learning a global behavioral model, recent IRL methods divide the demonstration data into parts, to account for the fact that different trajectories may correspond to different intentions, e.g., because they were generated by different domain experts. In this work, we go one step further: using the intuitive concept of subgoals, we build upon the premise that even a single trajectory can be explained more efficiently locally within a certain context than globally, enabling a more compact representation of the observed behavior. Based on this assumption, we build an implicit intentional model of the agent's goals to forecast its behavior in unobserved situations. The result is an integrated Bayesian prediction framework that significantly outperforms existing IRL solutions and provides smooth policy estimates consistent with the expert's plan. Most notably, our framework naturally handles situations where the intentions of the agent change over time and classical IRL algorithms fail. In addition, due to its probabilistic nature, the model can be straightforwardly applied in active learning scenarios to guide the demonstration process of the expert.
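The subgoal premise described in the abstract can be illustrated with a toy sketch. The code below is not the paper's Bayesian nonparametric inference framework; it only demonstrates the underlying idea that a single trajectory may be explained locally by attributing each transition to the candidate subgoal the agent is making progress toward. The 1-D grid world, the fixed subgoal set, and the `assign_subgoals` helper are all hypothetical and chosen for illustration.

```python
# Illustrative sketch only -- NOT the paper's inference algorithm.
# Premise: a single trajectory is explained more compactly by local
# subgoals than by one global intention. Here we label each transition
# with the subgoal whose distance shrinks the most during that step.

def assign_subgoals(trajectory, subgoals):
    """Label each transition with the index of the subgoal it moves toward."""
    labels = []
    for s, s_next in zip(trajectory, trajectory[1:]):
        # progress toward subgoal g = reduction in distance after the step
        progress = [abs(s - g) - abs(s_next - g) for g in subgoals]
        labels.append(max(range(len(subgoals)), key=lambda i: progress[i]))
    return labels

# Agent first walks toward state 5, then reverses toward state 0 --
# a change of intention that a single global reward cannot capture:
trajectory = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
subgoals = [5, 0]
print(assign_subgoals(trajectory, subgoals))
# -> [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

In the paper's setting, the subgoal set is not fixed in advance but inferred nonparametrically (via Gibbs sampling) together with the spatio-temporal assignment; this sketch only shows why such a segmentation makes a trajectory with changing intentions easy to represent.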

Entry type: Article
Published: 2024
Author(s): Šošić, Adrian ; Rueckert, Elmar ; Peters, Jan ; Zoubir, Abdelhak M. ; Koeppl, Heinz
Type of entry: Secondary publication
Title: Inverse Reinforcement Learning via Nonparametric Spatio-Temporal Subgoal Modeling
Language: English
Publication date: 30 April 2024
Place: Darmstadt
Date of first publication: 2018
Place of first publication: Brookline, Massachusetts
Publisher: Microtome Publishing
Journal or series title: Journal of Machine Learning Research
Volume: 19
Issue number: 69
Collation: 45 pages
DOI: 10.26083/tuprints-00026700
URL / URN: https://tuprints.ulb.tu-darmstadt.de/26700
Origin: Secondary publication service

Keywords: Learning from Demonstration, Inverse Reinforcement Learning, Bayesian Nonparametric Modeling, Subgoal Inference, Graphical Models, Gibbs Sampling
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-267009
Dewey Decimal Classification (DDC) subject group: 000 Generalities, computer science, information science > 004 Computer science
500 Natural sciences and mathematics > 570 Life sciences, biology
600 Technology, medicine, applied sciences > 621.3 Electrical engineering, electronics
Department(s)/field(s): 18 Fachbereich Elektrotechnik und Informationstechnik
18 Fachbereich Elektrotechnik und Informationstechnik > Institut für Nachrichtentechnik > Bioinspirierte Kommunikationssysteme
18 Fachbereich Elektrotechnik und Informationstechnik > Institut für Nachrichtentechnik
18 Fachbereich Elektrotechnik und Informationstechnik > Self-Organizing Systems Lab
18 Fachbereich Elektrotechnik und Informationstechnik > Institut für Nachrichtentechnik > Signalverarbeitung
20 Fachbereich Informatik
20 Fachbereich Informatik > Intelligente Autonome Systeme
Deposit date: 30 Apr 2024 09:17
Last modified: 13 May 2024 09:48