
DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs

Fang, Haishuo ; Zhu, Xiaodan ; Gurevych, Iryna (2024)
DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs.
62nd Annual Meeting of the Association for Computational Linguistics. Bangkok, Thailand (12.08.2024 – 16.08.2024)
doi: 10.18653/v1/2024.findings-acl.203
Conference publication, Bibliography

Abstract

Answering Questions over Knowledge Graphs (KGQA) is key to well-functioning autonomous language agents in various real-life applications. To improve the neural-symbolic reasoning capabilities of language agents powered by Large Language Models (LLMs) in KGQA, we propose the Decomposition-Alignment-Reasoning Agent (DARA) framework. DARA effectively parses questions into formal queries through a dual mechanism: high-level iterative task decomposition and low-level task grounding. Importantly, DARA can be efficiently trained with a small number of high-quality reasoning trajectories. Our experimental results demonstrate that DARA fine-tuned on LLMs (e.g. Llama-2-7B, Mistral) outperforms both in-context learning-based agents with GPT-4 and alternative fine-tuned agents, across different benchmarks, making such models more accessible for real-life applications. We also show that DARA attains performance comparable to state-of-the-art enumerating-and-ranking-based methods for KGQA.
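The dual mechanism described in the abstract (iterative high-level task decomposition combined with low-level grounding against the knowledge graph schema) can be pictured as a simple agent loop. The Python sketch below is only an illustration of that idea under stated assumptions: the names decompose, ground, answer and the SubTask structure are hypothetical, and the stubs stand in for LLM and schema-retrieval calls; none of this is taken from the paper's implementation.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SubTask:
    description: str          # natural-language step from high-level decomposition
    grounded_query: str = ""  # KG-executable fragment from low-level grounding

def decompose(question: str, history: List[SubTask]) -> Optional[SubTask]:
    # High-level step: propose the next sub-task, or None when the plan is complete.
    # Stub standing in for an LLM call.
    if history:
        return None
    return SubTask(description="Find the entity mentioned in: " + question)

def ground(subtask: SubTask) -> SubTask:
    # Low-level step: align the sub-task with KG schema items and emit a query fragment.
    # Stub standing in for schema retrieval plus an LLM call.
    subtask.grounded_query = 'SELECT ?x WHERE { ?x rdfs:label "example" . }'
    return subtask

def answer(question: str) -> str:
    # Alternate decomposition and grounding until the formal query is assembled.
    trajectory: List[SubTask] = []
    while (step := decompose(question, trajectory)) is not None:
        trajectory.append(ground(step))
    # A real agent would execute the assembled query against the knowledge graph here.
    return "\n".join(t.grounded_query for t in trajectory)

if __name__ == "__main__":
    print(answer("Which conference accepted the DARA paper?"))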

Item type: Conference publication
Published: 2024
Author(s): Fang, Haishuo ; Zhu, Xiaodan ; Gurevych, Iryna
Type of entry: Bibliography
Title: DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs
Language: English
Date of publication: 17 August 2024
Publisher: ACL
Book title: Findings of the Association for Computational Linguistics: ACL 2024
Event title: 62nd Annual Meeting of the Association for Computational Linguistics
Event location: Bangkok, Thailand
Event dates: 12.08.2024 – 16.08.2024
DOI: 10.18653/v1/2024.findings-acl.203
URL / URN: https://aclanthology.org/2024.findings-acl.203/
Uncontrolled keywords: UKP_p_eliza
Division(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 27 Aug 2024 13:16
Last modified: 03 Dec 2024 13:56
PPN: 524362831