
Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning

Chalvatzaki, Georgia ; Younes, Ali ; Nandha, Daljeet ; Le, An Thai ; Ribeiro, Leonardo F. R. ; Gurevych, Iryna (2023)
Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning.
In: Frontiers in Robotics and AI, 2023, 10
doi: 10.26083/tuprints-00024479
Article, secondary publication, publisher's version

Warning: A newer version of this entry is available.

Abstract

Long-horizon task planning is essential for the development of intelligent assistive and service robots. In this work, we investigate the applicability of a smaller class of large language models (LLMs), specifically GPT-2, in robotic task planning by learning to decompose tasks into subgoal specifications for a planner to execute sequentially. Our method grounds the input of the LLM on the domain that is represented as a scene graph, enabling it to translate human requests into executable robot plans, thereby learning to reason over long-horizon tasks, as encountered in the ALFRED benchmark. We compare our approach with classical planning and baseline methods to examine the applicability and generalizability of LLM-based planners. Our findings suggest that the knowledge stored in an LLM can be effectively grounded to perform long-horizon task planning, demonstrating the promising potential for the future application of neuro-symbolic planning methods in robotics.
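
To give a rough sense of the approach described above, the sketch below shows how a scene graph could be linearized into a text prompt and fed to a GPT-2 model from the Hugging Face transformers library to generate a subgoal sequence. This is a minimal illustration under stated assumptions, not the authors' pipeline: the linearization format, the <scene>/<task>/<plan> markers, the example subgoal vocabulary, and the use of the generic "gpt2" checkpoint (standing in for a fine-tuned robot language model) are all assumptions made for the sake of the example.

# Minimal sketch (assumed format, not the paper's exact pipeline):
# linearize a scene graph into text and let a GPT-2 model continue the
# prompt with an ordered list of subgoals for a downstream planner.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def linearize_scene_graph(objects, relations):
    """Flatten (subject, relation, object) triples into a single text string."""
    facts = [f"{s} {r} {o}" for s, r, o in relations]
    return "objects: " + ", ".join(objects) + " . relations: " + " ; ".join(facts)

# "gpt2" is a stand-in; a fine-tuned robot language model checkpoint is assumed.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

scene = linearize_scene_graph(
    objects=["apple", "fridge", "countertop"],
    relations=[("apple", "on", "countertop"), ("fridge", "near", "countertop")],
)
request = "put a chilled apple on the countertop"
# The <scene>/<task>/<plan> markers are an assumed prompt convention.
prompt = f"<scene> {scene} <task> {request} <plan>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
# The continuation after <plan> would be parsed as subgoals,
# e.g. "pick apple ; open fridge ; put apple in fridge ; ..." (format assumed).
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In a fine-tuned setting, each generated subgoal would then be handed to a symbolic planner for execution, which is the division of labor the abstract describes between the language model and the planner.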

Type of entry: Article
Published: 2023
Author(s): Chalvatzaki, Georgia ; Younes, Ali ; Nandha, Daljeet ; Le, An Thai ; Ribeiro, Leonardo F. R. ; Gurevych, Iryna
Kind of entry: Secondary publication
Title: Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning
Language: English
Year of publication: 2023
Place of publication: Darmstadt
Date of first publication: 2023
Publisher: Frontiers Media S.A.
Journal or series title: Frontiers in Robotics and AI
Volume: 10
Collation: 15 pages
DOI: 10.26083/tuprints-00024479
URL / URN: https://tuprints.ulb.tu-darmstadt.de/24479
Origin: Secondary publication via DeepGreen

Uncontrolled keywords: robot learning, task planning, grounding, language models (LMs), pretrained models, scene graphs
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-244795
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science
Department(s)/division(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Central institutions
Central institutions > hessian.AI - Hessian Center for Artificial Intelligence
Date deposited: 11 Sep 2023 12:35
Last modified: 18 Sep 2023 13:37