Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals

Tanneberg, Daniel ; Peters, Jan ; Rueckert, Elmar (2022)
Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals.
CoRL2017 - Conference on Robot Learning 2017. Mountain View, California (13.11.2017-15.11.2017)
doi: 10.26083/tuprints-00020580
Conference publication, secondary publication, publisher's version

Warning: A newer version of this entry is available.

Abstract

Continuous online adaptation is an essential ability for the vision of fully autonomous, lifelong-learning robots. Robots need to be able to adapt to changing environments and constraints, and this adaptation should be performed without interrupting the robot's motion. In this paper, we introduce a framework for probabilistic online motion planning and learning based on a bio-inspired stochastic recurrent neural network. Furthermore, we show that the model can adapt online and sample-efficiently using intrinsic motivation signals and a mental replay strategy. This fast adaptation behavior allows the robot to learn from only a small number of physical interactions and is a promising feature for reusing the model in different environments. We evaluate the online planning with a realistic dynamic simulation of the KUKA LWR robotic arm. The efficient online adaptation is shown in simulation by learning an unknown workspace constraint using mental replay and cognitive dissonance as the intrinsic motivation signal.
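
The record contains no code, but the learning scheme the abstract describes can be sketched. Below is a minimal, hypothetical Python illustration, not the authors' stochastic recurrent network: a plain linear forward model is adapted online, the norm of the prediction error serves as a cognitive-dissonance-style intrinsic motivation signal that scales each update, and a small buffer of past transitions is replayed ("mental replay") so that every physical interaction is exploited several times. The class name, the linear model, and all parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    class OnlineAdapter:
        """Hypothetical illustration: online model adaptation driven by a
        prediction-error ("cognitive dissonance") signal plus mental replay."""

        def __init__(self, dim, lr=0.05, buffer_size=32, n_replays=5):
            self.W = np.zeros((dim, dim))  # simple linear forward model (an assumption,
                                           # standing in for the paper's stochastic network)
            self.lr = lr
            self.buffer = []               # stored transitions for mental replay
            self.buffer_size = buffer_size
            self.n_replays = n_replays

        def update(self, s, s_next):
            # Intrinsic signal: mismatch between predicted and observed next state.
            error = s_next - self.W @ s
            dissonance = np.linalg.norm(error)
            # Surprising observations (large dissonance) drive stronger adaptation.
            self.W += self.lr * dissonance * np.outer(error, s)
            # Mental replay: store the transition and rehearse a few past ones,
            # so one physical interaction yields several model updates.
            self.buffer.append((s, s_next))
            del self.buffer[:-self.buffer_size]
            for _ in range(self.n_replays):
                ps, ps_next = self.buffer[rng.integers(len(self.buffer))]
                self.W += self.lr * np.outer(ps_next - self.W @ ps, ps)
            return dissonance

    # Demo: recover unknown linear dynamics from a few noisy interactions.
    true_A = np.array([[0.9, 0.1], [0.0, 0.95]])
    adapter = OnlineAdapter(dim=2)
    for _ in range(200):
        s = rng.normal(size=2)                        # excitation for the demo
        s_next = true_A @ s + 0.01 * rng.normal(size=2)
        adapter.update(s, s_next)
    print(np.round(adapter.W, 2))                     # close to true_A

The dissonance-scaled step is the point of interest: updates stay small when the model already predicts well and grow when the environment changes, which, together with replay, is what makes the adaptation sample-efficient.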

Type of entry: Conference publication
Published: 2022
Author(s): Tanneberg, Daniel ; Peters, Jan ; Rueckert, Elmar
Kind of entry: Secondary publication
Title: Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals
Language: English
Year of publication: 2022
Place of publication: Darmstadt
Date of first publication: 2022
Publisher: PMLR
Book title: Proceedings of the 1st Annual Conference on Robot Learning
Series: Proceedings of Machine Learning Research
Series volume: 78
Collation: 8 pages
Event title: CoRL2017 - Conference on Robot Learning 2017
Event location: Mountain View, California
Event dates: 13.11.2017-15.11.2017
DOI: 10.26083/tuprints-00020580
URL / URN: https://tuprints.ulb.tu-darmstadt.de/20580
Origin: Secondary publication service
Uncontrolled keywords: Lifelong learning, Intrinsic Motivation, Recurrent Neural Networks
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-205803
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science
Department(s): 20 Department of Computer Science
20 Department of Computer Science > Intelligent Autonomous Systems
TU projects: EC/H2020|640554|SKILLS4ROBOTS
Date deposited: 18 Nov 2022 14:30
Last modified: 21 Nov 2022 10:43