
Intrinsic motivation and mental replay enable efficient online adaptation in stochastic recurrent networks

Tanneberg, Daniel ; Peters, Jan ; Rueckert, Elmar (2022)
Intrinsic motivation and mental replay enable efficient online adaptation in stochastic recurrent networks.
In: Neural Networks, 2022, 109
doi: 10.26083/tuprints-00020537
Article, secondary publication, postprint


Abstract

Autonomous robots need to interact with unknown, unstructured and changing environments, constantly facing novel challenges. Therefore, continuous online adaptation for lifelong learning and sample-efficient mechanisms to adapt to changes in the environment, the constraints, the tasks, or the robot itself are crucial. In this work, we propose a novel framework for probabilistic online motion planning with online adaptation, based on a bio-inspired stochastic recurrent neural network. By using learning signals that mimic the intrinsic motivation signal of cognitive dissonance, together with a mental replay strategy that intensifies experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments within seconds. We evaluate our online planning and adaptation framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is demonstrated by learning unknown workspace constraints sample-efficiently from few physical interactions while following given waypoints.
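The abstract combines two mechanisms: a learning signal that mimics cognitive dissonance (an intrinsic motivation signal) to gate adaptation, and mental replay that re-applies each physical interaction several times to intensify learning. The following Python sketch only illustrates that general idea under simplifying assumptions; the network model, the dissonance measure and the Hebbian-style update rule are placeholders, not the authors' implementation.

    # Illustrative sketch (not the authors' method): intrinsic-motivation-gated
    # online adaptation of a stochastic recurrent network with mental replay.
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 50
    W = rng.normal(scale=0.1, size=(n_neurons, n_neurons))  # hypothetical recurrent weights

    def predict(state, W):
        """Stochastic recurrent step: sigmoid drive followed by Bernoulli sampling."""
        drive = 1.0 / (1.0 + np.exp(-W @ state))
        return (rng.random(n_neurons) < drive).astype(float)

    def cognitive_dissonance(predicted, observed):
        """Mismatch between the network's expectation and the observed outcome."""
        return float(np.mean(np.abs(predicted - observed)))

    def adapt_online(state, observed, W, base_lr=0.05, n_replays=10):
        """One online adaptation step; the single interaction is mentally replayed n_replays times."""
        for _ in range(n_replays):
            predicted = predict(state, W)
            dissonance = cognitive_dissonance(predicted, observed)
            # Hebbian-style correction, scaled by the intrinsic-motivation signal.
            W += base_lr * dissonance * np.outer(observed - predicted, state)
        return W

    # Example: a single physical interaction drives rapid adaptation of the model.
    state = rng.random(n_neurons)
    observed = rng.integers(0, 2, n_neurons).astype(float)
    W = adapt_online(state, observed, W)

In this sketch the learning rate is effectively modulated by how surprising the observation is, and replaying the same interaction several times stands in for the paper's strategy of intensifying few physical experiences.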

Type of entry: Article
Published: 2022
Author(s): Tanneberg, Daniel ; Peters, Jan ; Rueckert, Elmar
Type of publication: Secondary publication
Title: Intrinsic motivation and mental replay enable efficient online adaptation in stochastic recurrent networks
Language: English
Year of publication: 2022
Place of publication: Darmstadt
Date of first publication: 2022
Publisher: Elsevier
Journal, newspaper or series title: Neural Networks
Volume: 109
Collation: 18 pages
DOI: 10.26083/tuprints-00020537
URL / URN: https://tuprints.ulb.tu-darmstadt.de/20537
Origin: Secondary publication service
Free keywords: Intrinsic motivation, Online learning, Experience replay, Autonomous robots, Spiking recurrent networks, Neural sampling
Status: Postprint
URN: urn:nbn:de:tuda-tuprints-205376
Dewey Decimal Classification (DDC) subject group: 000 Generalities, computer science, information science > 004 Computer science
600 Technology, medicine, applied sciences > 620 Engineering and mechanical engineering
Department(s)/institute(s): 20 Department of Computer Science
20 Department of Computer Science > Intelligent Autonomous Systems
TU projects: EC/H2020|640554|SKILLS4ROBOTS
Date deposited: 18 Nov 2022 13:46
Last modified: 21 Nov 2022 10:21