Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures

Moosavi, Nafise Sadat ; de Boer, Marcel ; Utama, Prasetya ; Gurevych, Iryna (2020)
Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures.
doi: 10.48550/arXiv.2010.12510
Report, Bibliography

Abstract

Existing NLP datasets contain various biases, and models tend to learn those biases quickly, which in turn limits their robustness. Existing approaches to improving robustness against dataset biases mostly focus on changing the training objective so that models learn less from biased examples. Moreover, they mostly target a specific bias, and while they improve performance on adversarial evaluation sets for that bias, they may bias the model in other ways and thereby hurt overall robustness. In this paper, we propose to augment the input sentences in the training data with their corresponding predicate-argument structures, which provide a higher-level abstraction over different realizations of the same meaning and help the model recognize the important parts of sentences. We show that, without targeting a specific bias, our sentence augmentation improves the robustness of transformer models against multiple biases. In addition, we show that models can still be vulnerable to the lexical overlap bias even when the training data does not contain this bias, and that the sentence augmentation also improves robustness in this scenario.
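The core idea of the abstract can be sketched as follows: each training sentence is extended with a linearized version of its predicate-argument structure, as produced by a semantic role labeler. The `<ARG0>`/`<V>` marker format, the `[SEP]` delimiter, and the hard-coded SRL frame below are illustrative assumptions, not the paper's exact implementation; in practice the frames would come from an SRL model.

```python
# Minimal sketch of sentence augmentation with predicate-argument
# structures. The SRL frame is hard-coded here for illustration;
# in a real pipeline it would be produced by a semantic role labeler.

def augment_with_pas(sentence, frames):
    """Append linearized predicate-argument frames to a sentence.

    frames: list of dicts mapping role labels (e.g. "V", "ARG0")
    to the text spans that fill those roles.
    """
    parts = [sentence]
    for frame in frames:
        # Linearize one frame as "<ROLE> span <ROLE> span ..."
        linearized = " ".join(f"<{role}> {span}" for role, span in frame.items())
        parts.append(linearized)
    return " [SEP] ".join(parts)

sentence = "The doctor visited the lawyer."
frames = [{"ARG0": "The doctor", "V": "visited", "ARG1": "the lawyer"}]
print(augment_with_pas(sentence, frames))
# -> The doctor visited the lawyer. [SEP] <ARG0> The doctor <V> visited <ARG1> the lawyer
```

Because the appended frame abstracts over surface word order, two paraphrases with the same meaning yield similar augmented suffixes, which is what helps the model look past shallow lexical cues.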

Entry type: Report
Published: 2020
Author(s): Moosavi, Nafise Sadat ; de Boer, Marcel ; Utama, Prasetya ; Gurevych, Iryna
Type of entry: Bibliography
Title: Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures
Language: English
Year of publication: 23 October 2020
Publisher: arXiv
Series: Computation and Language
Edition: 1st version
DOI: 10.48550/arXiv.2010.12510
URL / URN: https://arxiv.org/abs/2010.12510

Free keywords: UKP_p_crisp_senpai
Additional information:

Preprint

Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
DFG Research Training Groups
DFG Research Training Groups > Research Training Group 1994 Adaptive Preparation of Information from Heterogeneous Sources
Date deposited: 15 Mar 2021 12:14
Last modified: 11 Jul 2024 09:57