
Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs

Schmitt, Martin ; Ribeiro, Leonardo F. R. ; Dufter, Philipp ; Gurevych, Iryna ; Schütze, Hinrich (2021)
Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs.
15th Workshop on Graph-Based Natural Language Processing (TextGraphs-15). Virtual conference (11.06.2021)
Conference publication, Bibliography

Abstract

We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation. With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph, not only direct neighbors, facilitating the detection of global patterns. We represent the relation between two nodes as the length of the shortest path between them. Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate Graformer on two popular graph-to-text generation benchmarks, AGENDA and WebNLG, where it achieves strong performance while using many fewer parameters than other approaches.
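The core idea in the abstract, encoding the relation between two nodes as their shortest-path length and letting each attention head weight these distances differently, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the BFS distance computation, the unreachable-pair sentinel of `n`, and the single-head NumPy attention are all assumptions made for the example.

```python
# Hedged sketch of shortest-path-based graph self-attention (not the
# authors' code): every node attends to ALL nodes, and a learned scalar
# bias per shortest-path distance is added to the attention logits.
import math
from collections import deque

import numpy as np

def shortest_path_lengths(n, edges):
    """All-pairs shortest-path lengths via BFS on an unweighted graph.
    Unreachable pairs get a sentinel distance of n (an assumption here)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)  # treat edges as undirected for distance purposes
    dist = np.full((n, n), n, dtype=int)
    for s in range(n):
        dist[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s, v] == n:  # not visited yet
                    dist[s, v] = dist[s, u] + 1
                    q.append(v)
    return dist

def graph_self_attention(x, dist, bias_table):
    """One attention head over node states x of shape (n, d): plain
    dot-product scores plus a distance-dependent bias looked up from
    bias_table (one learnable scalar per possible distance)."""
    n, d = x.shape
    logits = x @ x.T / math.sqrt(d)     # standard scaled dot-product scores
    logits = logits + bias_table[dist]  # relative-position bias by distance
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ x                  # every node aggregates all nodes
```

With multiple heads, each head would own its own `bias_table`, so different heads can emphasize near or far nodes, which is one way to read the "differently connected views of the input graph" in the abstract.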

Item type: Conference publication
Published: 2021
Author(s): Schmitt, Martin ; Ribeiro, Leonardo F. R. ; Dufter, Philipp ; Gurevych, Iryna ; Schütze, Hinrich
Entry type: Bibliography
Title: Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs
Language: English
Publication date: 3 May 2021
Collation: 12 pages
Event title: 15th Workshop on Graph-Based Natural Language Processing (TextGraphs-15)
Event location: Virtual conference
Event date: 11.06.2021
URL / URN: https://arxiv.org/abs/2006.09242
Related links:

Additional information:

Part of the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (06.-11.06.2021)

Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
DFG Research Training Groups
DFG Research Training Groups > Research Training Group 1994 "Adaptive Informationsaufbereitung aus heterogenen Quellen" (AIPHES)
Date deposited: 04 May 2021 06:34
Last modified: 19 Dec 2024 10:24