
Analyzing Dataset Annotation Quality Management in the Wild

Klie, Jan-Christoph ; Castilho, Richard Eckart de ; Gurevych, Iryna (2024)
Analyzing Dataset Annotation Quality Management in the Wild.
In: Computational Linguistics
doi: 10.1162/coli_a_00516
Article, Bibliography

Abstract

Data quality is crucial for training accurate, unbiased, and trustworthy machine learning models as well as for their correct evaluation. Recent works, however, have shown that even popular datasets used to train and evaluate state-of-the-art models contain a non-negligible amount of erroneous annotations, biases, or artifacts. While practices and guidelines regarding dataset creation projects exist, to our knowledge, large-scale analysis has yet to be performed on how quality management is conducted when creating natural language datasets and whether these recommendations are followed. Therefore, we first survey and summarize recommended quality management practices for dataset creation as described in the literature and provide suggestions for applying them. Then, we compile a corpus of 591 scientific publications introducing text datasets and annotate it for quality-related aspects, such as annotator management, agreement, adjudication, or data validation. Using these annotations, we then analyze how quality management is conducted in practice. A majority of the annotated publications apply good or excellent quality management. However, we deem the effort of 30% of the works as only subpar. Our analysis also shows common errors, especially when using inter-annotator agreement and computing annotation error rates.

Item Type: Article
Published: 2024
Author(s): Klie, Jan-Christoph ; Castilho, Richard Eckart de ; Gurevych, Iryna
Entry Type: Bibliography
Title: Analyzing Dataset Annotation Quality Management in the Wild
Language: English
Year of Publication: March 2024
Journal, Newspaper, or Series Title: Computational Linguistics
DOI: 10.1162/coli_a_00516
URL / URN: https://doi.org/10.1162/coli_a_00516

Uncontrolled Keywords: UKP_p_EVIDENCE, UKP_p_PEER
Department(s)/Division(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date Deposited: 24 Jun 2024 10:00
Last Modified: 25 Jun 2024 08:34
PPN: 51936175X