Chalvatzaki, Georgia ; Younes, Ali ; Nandha, Daljeet ; Le, An Thai ; Ribeiro, Leonardo F. R. ; Gurevych, Iryna (2023)
Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning.
In: Frontiers in Robotics and AI, 10
doi: 10.3389/frobt.2023.1221739
Article, Bibliography
This is the latest version of this entry.
Abstract
Long-horizon task planning is essential for the development of intelligent assistive and service robots. In this work, we investigate the applicability of a smaller class of large language models (LLMs), specifically GPT-2, in robotic task planning by learning to decompose tasks into subgoal specifications for a planner to execute sequentially. Our method grounds the input of the LLM on the domain that is represented as a scene graph, enabling it to translate human requests into executable robot plans, thereby learning to reason over long-horizon tasks, as encountered in the ALFRED benchmark. We compare our approach with classical planning and baseline methods to examine the applicability and generalizability of LLM-based planners. Our findings suggest that the knowledge stored in an LLM can be effectively grounded to perform long-horizon task planning, demonstrating the promising potential for the future application of neuro-symbolic planning methods in robotics.
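As an illustration of the approach summarized in the abstract, the following is a minimal sketch (not the authors' released code) of how a scene graph might be linearized into a text prompt that conditions a GPT-2 model to emit subgoal specifications, using the Hugging Face `transformers` library. The scene-graph schema, the prompt markers, and the `linearize_scene_graph` / `plan_subgoals` helpers are placeholder assumptions; the paper's exact input format and fine-tuned checkpoint are not reproduced here.

```python
# Minimal illustrative sketch, NOT the authors' code: linearize a scene graph
# into a prompt and query a (fine-tuned) GPT-2 for subgoal specifications.
# Assumes Hugging Face `transformers`; schema, prompt markers, and checkpoint
# name are placeholder assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer


def linearize_scene_graph(graph: dict) -> str:
    """Flatten (subject, relation, object) triples into a token sequence."""
    return " ".join(f"({s} {r} {o})" for s, r, o in graph["relations"])


def plan_subgoals(instruction: str, graph: dict, checkpoint: str = "gpt2") -> str:
    """Condition GPT-2 on the scene graph and the request, return generated text."""
    tokenizer = GPT2Tokenizer.from_pretrained(checkpoint)
    model = GPT2LMHeadModel.from_pretrained(checkpoint)
    prompt = f"<scene> {linearize_scene_graph(graph)} <task> {instruction} <plan>"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=False,                      # greedy decoding for repeatability
        pad_token_id=tokenizer.eos_token_id,
    )
    new_tokens = output[0][inputs["input_ids"].shape[1]:]  # strip the prompt
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    scene = {"relations": [("apple", "on", "countertop"),
                           ("fridge", "in", "kitchen")]}
    # With the base "gpt2" checkpoint this only demonstrates the interface;
    # meaningful subgoals require a checkpoint fine-tuned on grounded plans.
    print(plan_subgoals("Put a chilled apple on the countertop.", scene))
```

In the pipeline described in the abstract, the generated subgoal specifications would then be handed to a planner for sequential execution.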
Item type: | Article |
---|---|
Published: | 2023 |
Author(s): | Chalvatzaki, Georgia ; Younes, Ali ; Nandha, Daljeet ; Le, An Thai ; Ribeiro, Leonardo F. R. ; Gurevych, Iryna |
Type of entry: | Bibliography |
Title: | Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning |
Language: | English |
Year of publication: | 2023 |
Place of publication: | Darmstadt |
Publisher: | Frontiers Media S.A. |
Journal or series title: | Frontiers in Robotics and AI |
Volume: | 10 |
Collation: | 15 pages |
DOI: | 10.3389/frobt.2023.1221739 |
Free keywords: | robot learning, task planning, grounding, language models (LMs), pretrained models, scene graphs |
Dewey Decimal Classification (DDC): | 000 Generalities, computer science, information science > 004 Computer science |
Division(s)/field(s): | 20 Department of Computer Science; 20 Department of Computer Science > Ubiquitous Knowledge Processing; Central facilities; Central facilities > hessian.AI - Hessian Center for Artificial Intelligence |
Date deposited: | 02 Aug 2024 12:55 |
Last modified: | 02 Aug 2024 12:55 |
Available versions of this entry
- Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning. (deposited 11 Sep 2023 12:35)
- Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning. (deposited 02 Aug 2024 12:55) [currently displayed]