Annotation Error Detection: Analyzing the Past and Present for a More Coherent Future

Klie, Jan-Christoph ; Webber, Bonnie ; Gurevych, Iryna (2022)
Annotation Error Detection: Analyzing the Past and Present for a More Coherent Future.
In: Computational Linguistics
doi: 10.1162/coli_a_00464
Article, Bibliography

Abstract

Annotated data is an essential ingredient in natural language processing for training and evaluating machine learning models. It is therefore very desirable for the annotations to be of high quality. Recent work, however, has shown that several popular datasets contain a surprising number of annotation errors or inconsistencies. To alleviate this issue, many methods for annotation error detection have been devised over the years. While researchers show that their approaches work well on their newly introduced datasets, they rarely compare their methods to previous work or on the same datasets. This raises strong concerns about methods’ general performance and makes it difficult to assess their strengths and weaknesses. We therefore reimplement 18 methods for detecting potential annotation errors and evaluate them on 9 English datasets for text classification as well as token and span labeling. In addition, we define a uniform evaluation setup, including a new formalization of the annotation error detection task, an evaluation protocol, and general best practices. To facilitate future research and reproducibility, we release our datasets and implementations in an easy-to-use and open-source software package.
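
To make the task concrete: one common family of annotation error detectors flags instances whose annotated label disagrees with an out-of-fold model prediction, ranked by the model's confidence in the disagreement. The Python sketch below illustrates that general idea for text classification using scikit-learn; the helper name flag_suspicious is hypothetical, and this is a minimal illustration of the task, not the software package released with the paper.

# Illustrative sketch only (not the authors' released package): flag
# instances whose annotated label disagrees with an out-of-fold model
# prediction, ranked by the model's confidence in the disagreement.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

def flag_suspicious(texts, labels, n_folds=10):
    """Return indices of potentially mislabeled instances, most suspicious first."""
    y = np.asarray(labels)
    classes = np.unique(y)  # predict_proba columns follow this sorted order
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    # Out-of-fold probabilities: each instance is scored by a model
    # that never saw it during training.
    probs = cross_val_predict(model, texts, y, cv=n_folds, method="predict_proba")
    predicted = classes[probs.argmax(axis=1)]
    confidence = probs.max(axis=1)
    # An instance is suspicious when the model confidently predicts a
    # label other than the annotated one.
    suspicious = np.where(predicted != y)[0]
    return suspicious[np.argsort(-confidence[suspicious])]

In practice, the top of such a ranking would be reviewed manually. The paper's contribution is to evaluate many such detectors under one uniform protocol, not to propose this particular one.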

Entry type: Article
Published: 2022
Author(s): Klie, Jan-Christoph ; Webber, Bonnie ; Gurevych, Iryna
Type of entry: Bibliography
Title: Annotation Error Detection: Analyzing the Past and Present for a More Coherent Future
Language: English
Date of publication: 6 October 2022
Publisher: MIT Press
Journal, newspaper, or series title: Computational Linguistics
Collation: 42 pages
DOI: 10.1162/coli_a_00464
Free keywords: UKP_p_INCEpTION
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 13 Oct 2022 07:05
Last modified: 21 Feb 2023 11:13
PPN: 505193639